Design and Evaluation of Human–Machine Interface for NEXUS: A Custom Microassembly System

2020 ◽  
Vol 8 (4) ◽  
Author(s):  
Danming Wei ◽  
Mariah B. Hall ◽  
Andriy Sherehiy ◽  
Dan O. Popa

Abstract Microassembly systems utilizing precision robotics have long been used for realizing three-dimensional microstructures such as microsystems and microrobots. Prior to assembly, microscale components are fabricated using micro-electromechanical-system (MEMS) technology. The microassembly system then directs a microgripper through a series of automated or human-controlled pick-and-place operations. In this paper, we describe a novel custom microassembly system, named NEXUS, that can be used to prototype MEMS microrobots. The NEXUS integrates multi-degree-of-freedom (DOF) precision positioners, microscope computer vision, and microscale process tools such as a microgripper and vacuum tip. A semi-autonomous human–machine interface (HMI) was programmed to allow the operator to interact with the microassembly system. The NEXUS human–machine interface includes multiple functions, such as positioning, target detection, visual servoing, and inspection. The microassembly system's HMI was used by operators to assemble various three-dimensional microrobots such as the Solarpede, a novel light-powered stick-and-slip mobile microcrawler. Experimental results are reported in this paper to evaluate the system's semi-autonomous capabilities in terms of assembly rate and yield and to compare them with purely teleoperated assembly performance. Results show that the semi-automated capabilities of the microassembly system's HMI offer a more consistent assembly rate of microrobot components and are less reliant on the operator's experience and skill.
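The workflow this abstract describes, vision-based target detection and visual servoing wrapped around operator checkpoints, can be sketched as a simple control loop. The following Python sketch is purely illustrative: every class and method name is a hypothetical stand-in, not the authors' actual HMI, and the stubs simulate hardware so the flow runs end to end.

```python
# Illustrative sketch of a semi-autonomous pick-and-place cycle; names are
# hypothetical, not the NEXUS API. Stubs simulate hardware responses.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float  # stage coordinates, micrometres
    y: float
    z: float

class MicroassemblyCell:
    def detect_target(self) -> Pose:
        # Vision step: locate the next MEMS part in the microscope image.
        return Pose(120.0, 85.0, 0.0)

    def visual_servo(self, goal: Pose, tol_um: float = 1.0) -> None:
        # Closed-loop positioning: drive the stages until the image-space
        # error falls below tol_um.
        pass

    def grip(self) -> bool: return True      # close the microgripper
    def release(self) -> bool: return True   # open gripper / vacuum off
    def inspect(self) -> bool: return True   # verify the part seated correctly

def assemble_one(cell, socket, confirm=lambda msg: True) -> bool:
    part = cell.detect_target()        # autonomous: vision finds the part
    cell.visual_servo(part)            # autonomous: align gripper with part
    if not confirm("grip part?"):      # human-in-the-loop checkpoint
        return False
    if not cell.grip():
        return False
    cell.visual_servo(socket)          # autonomous: carry part to the socket
    cell.release()
    return cell.inspect()              # inspection closes the cycle

print(assemble_one(MicroassemblyCell(), Pose(500.0, 420.0, 0.0)))
```

The split mirrors the paper's framing: the repetitive, precision-critical motions are automated, while the operator retains a confirmation step, which is what makes the assembly rate less dependent on operator skill.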

Author(s):  
Danming Wei ◽  
Mariah B. Hall ◽  
Andriy Sherehiy ◽  
Sumit Kumar Das ◽  
Dan O. Popa

Abstract Microassembly systems utilizing precision robotics have long been used for realizing three-dimensional microstructures such as microrobots. Prior to assembly, such components are fabricated using Micro-Electro-Mechanical Systems (MEMS) technology. The microassembly system then directs a microgripper through automated or human-controlled pick-and-place operations. In this paper, we describe a novel custom microassembly system, named NEXUS. The NEXUS integrates multi-degree-of-freedom (DOF) precision positioners, microscope computer vision, and micro-scale process tools such as a microgripper and vacuum tip. A semi-autonomous human-machine interface (HMI) programmed in NI LabVIEW® allows the operator to interact with the microassembly system. The NEXUS human-machine interface includes multiple functions, such as positioning, target detection, visual servoing, and inspection. The microassembly system's HMI was used by operators to assemble various three-dimensional microrobots such as the Solarpede, a novel light-powered stick-and-slip mobile microcrawler. Experimental results reported in this paper evaluate the system's semi-autonomous capabilities in terms of assembly rate and yield and compare them with purely teleoperated assembly performance. Results show that the semi-automated capabilities of the microassembly system's HMI offer a more consistent assembly rate of microrobot components.


2016 ◽  
Vol 39 (7) ◽  
pp. 1037-1046 ◽  
Author(s):  
Hossein Nourmohammadi ◽  
Jafar Keighobadi ◽  
Mohsen Bahrami

Biomedical applications of swimming microrobots, including drug delivery, microsurgery, and disease monitoring, make them an increasingly attractive subject of MEMS research. In this paper, inspired by the flagellar motion of microorganisms such as bacteria, and building on recent attempts at one- and two-dimensional modelling of swimming microrobots, a three-degrees-of-freedom swimming microrobot is developed. In the proposed design, the body of the swimming microrobot is driven by multiple prokaryotic flagella, which produce a propulsion force by rotating in the fluid medium. The presented swimming microrobot is capable of performing three-dimensional manoeuvres and moving along three-dimensional reference paths. Following dynamical modelling of the microrobot's motion, a suitable controller is designed for path-tracking purposes. The propulsion force generated by the flagella is modelled using resistive-force theory, and the feedback linearization method is applied for tracking control of the swimming microrobot along the desired motion trajectories. With three flagella, the microrobot is able to perform three-dimensional manoeuvres. Simulation results show that the designed control system delivers the desired tracking performance, enabling the microrobot to execute three-dimensional manoeuvres and follow the reference trajectory.
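The abstract does not reproduce the force model, but resistive-force theory in its standard Gray–Hancock form (an assumption about which variant the authors use) expresses the hydrodynamic force per unit length on a flagellar element through anisotropic drag coefficients:

```latex
% Standard resistive-force theory (Gray--Hancock form); whether the paper
% uses exactly these coefficients is an assumption.
\[
  \mathbf{f} \;=\; -\,c_t\,(\mathbf{v}\cdot\hat{\mathbf{t}})\,\hat{\mathbf{t}}
                 \;-\; c_n\!\left[\mathbf{v} - (\mathbf{v}\cdot\hat{\mathbf{t}})\,\hat{\mathbf{t}}\right],
  \qquad
  c_t = \frac{2\pi\mu}{\ln(2\lambda/r) - \tfrac{1}{2}},
  \quad
  c_n = \frac{4\pi\mu}{\ln(2\lambda/r) + \tfrac{1}{2}},
\]
```

where v is the local velocity of the element relative to the fluid, t̂ its unit tangent, μ the fluid viscosity, λ the helical wavelength, and r the filament radius. Because c_n is roughly twice c_t, rotating the helix produces a net axial thrust, which is what allows three independently driven flagella to steer the body in three dimensions.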


2021 ◽  
Vol 13 (2) ◽  
pp. 71-78
Author(s):  
Awang Noor Indra Wardana ◽  
Yahya Bachtiar ◽  
M Bobby Andriansyah ◽  
Rifdahlia Salma

Process industries such as oil refineries, petrochemical plants, and power plants require continuous monitoring, which the operator usually carries out via a human-machine interface. However, it is difficult to know the condition of process equipment in real time. Augmented reality allows engineers to visualize process equipment in real time when conducting field inspections. This paper discusses the implementation of augmented reality in the human-machine interface of a fluid catalytic cracking process in an oil refinery. The design started with the development of a three-dimensional model of the process equipment in Autodesk Inventor. The three-dimensional model was then built into an augmented reality application for a handheld device using Unity 3D connected to the Vuforia Engine. Data communication performance was analyzed using inferential statistics to compare quality-of-service levels 0, 1, and 2. A Tukey test showed that the network latency at level 2, 0.704 ± 0.108 seconds, was significantly higher than at levels 0 and 1. These results indicate that augmented reality can be implemented in human-machine interfaces while maintaining the quality of data communication by using the Message Queuing Telemetry Transport (MQTT) protocol at level 0 or 1.
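The service-quality levels the paper compares map directly onto the delivery guarantees of MQTT publishes. Below is a minimal sketch using the Eclipse paho-mqtt Python client; the broker address, topic, and payload are placeholders, not the authors' plant setup.

```python
# Minimal sketch: publishing an equipment reading at each MQTT QoS level.
# Broker, topic, and payload are illustrative placeholders.
import paho.mqtt.client as mqtt

client = mqtt.Client()  # paho-mqtt 1.x style; 2.x also takes a CallbackAPIVersion
client.connect("broker.example.com", 1883)
client.loop_start()

reading = '{"tag": "fcc_riser_temp", "value": 519.3, "unit": "degC"}'

# QoS 0: at most once -- fire and forget, lowest latency, no acknowledgement.
client.publish("plant/fcc/riser/temperature", reading, qos=0)

# QoS 1: at least once -- broker acknowledges (PUBACK); duplicates possible.
client.publish("plant/fcc/riser/temperature", reading, qos=1)

# QoS 2: exactly once -- four-way handshake; this extra round-tripping is
# consistent with the significantly higher latency the paper measured.
info = client.publish("plant/fcc/riser/temperature", reading, qos=2)
info.wait_for_publish()

client.loop_stop()
client.disconnect()
```

For a monitoring display that is refreshed continuously, occasional loss (QoS 0) or an occasional duplicate (QoS 1) is usually acceptable, which is why the paper's recommendation of level 0 or 1 trades delivery guarantees for latency.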


Robotica ◽  
1995 ◽  
Vol 13 (1) ◽  
pp. 87-94 ◽  
Author(s):  
Lindsay Kleeman

Summary A novel design of a three-dimensional localiser intended for autonomous robot vehicles is presented. A prototype is implemented in air using ultrasonic beacons at known positions, and can be adapted to underwater environments, where it has important applications such as deep-sea maintenance, data collection, and reconnaissance tasks. The paper presents the hardware design, algorithms for position and orientation determination (six degrees of freedom), and performance results of a laboratory prototype. Two approaches are discussed for position and orientation determination: (i) fast single-measurement-set techniques and (ii) computationally slower Kalman-filter-based techniques. The Kalman filter approach allows the incorporation of robot motion information, more accurate beacon modelling, and the capability of processing data from more than four beacons, the minimum number required for localisation.
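For the first, "fast single-measurement-set" family, a closed-form least-squares range fix illustrates the core computation. The sketch below uses a standard linearization of the range equations, not Kleeman's exact algorithm, and recovers position only; orientation and the Kalman filter variant are omitted.

```python
# Closed-form position fix from ranges to known beacons (standard
# linearization; an illustration, not the paper's exact algorithm).
import numpy as np

def trilaterate(beacons: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """beacons: (n, 3) known positions; ranges: (n,) distances; n >= 4."""
    b0, d0 = beacons[0], ranges[0]
    # Subtracting the first range equation |x - b0|^2 = d0^2 from the others
    # cancels the quadratic |x|^2 term, leaving a linear system A x = y.
    A = 2.0 * (beacons[1:] - b0)
    y = (d0**2 - ranges[1:]**2
         + np.sum(beacons[1:]**2, axis=1) - np.sum(b0**2))
    x, *_ = np.linalg.lstsq(A, y, rcond=None)
    return x

beacons = np.array([[0., 0., 3.], [5., 0., 3.], [0., 5., 3.], [5., 5., 0.]])
true_pos = np.array([2.0, 1.5, 0.5])
ranges = np.linalg.norm(beacons - true_pos, axis=1)
print(trilaterate(beacons, ranges))  # ~ [2.0, 1.5, 0.5]
```

With more than four beacons the same least-squares step absorbs the extra measurements, which is the over-determined case the Kalman filter approach handles with proper noise weighting.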


Author(s):  
Jingshu Liu ◽  
Yuan Li

We propose a visual servoing (VS) approach based on deep learning to perform precise, robust, and real-time six-degrees-of-freedom (6DOF) control of robotic manipulation, easing the extraction of image features and the estimation of the nonlinear relationship between the two-dimensional image space and the three-dimensional Cartesian space that burden traditional VS tasks. Owing to the learning capabilities of convolutional neural networks (CNNs), the network autonomously learns to select and extract image features and to fit the nonlinear mapping. We describe a method for designing and generating a dataset from one or a few images by simulating the motion of an eye-in-hand robotic system; this sidesteps the large amount of training data, difficult to collect in practice, that network training would otherwise require. The dataset is used to train our VS convolutional neural network. Subsequently, a two-stream network is designed and the corresponding control approach is presented. Experimental results show that the method converges robustly, with an average position error of less than 3 mm and an average rotation error of less than 2.5°.
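The abstract names a two-stream design but gives no layer details, so the following PyTorch sketch only illustrates the general pattern: the current and desired camera views each pass through their own convolutional stream, and a shared head regresses a 6DOF motion command. All layer sizes are arbitrary placeholders, not the paper's network.

```python
# Generic two-stream visual-servoing sketch (current vs. desired view ->
# 6DOF command); layer sizes are placeholders, not the paper's architecture.
import torch
import torch.nn as nn

class TwoStreamVS(nn.Module):
    def __init__(self):
        super().__init__()
        def stream():  # one convolutional feature extractor per image
            return nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.current, self.desired = stream(), stream()
        self.head = nn.Sequential(
            nn.Linear(2 * 32 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, 6))  # 6DOF output: translation + rotation

    def forward(self, img_now, img_goal):
        # Concatenate per-view features, then regress the motion command.
        f = torch.cat([self.current(img_now), self.desired(img_goal)], dim=1)
        return self.head(f)

net = TwoStreamVS()
cmd = net(torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128))
print(cmd.shape)  # torch.Size([1, 6])
```

Training pairs for such a network can be generated as the paper suggests: render or warp one reference image under simulated eye-in-hand motions and record the known relative pose as the regression target.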


1990 ◽  
Author(s):  
B. Bly ◽  
P. J. Price ◽  
S. Park ◽  
S. Tepper ◽  
E. Jackson ◽  
...  

Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 687
Author(s):  
Jinzhen Dou ◽  
Shanguang Chen ◽  
Zhi Tang ◽  
Chang Xu ◽  
Chengqi Xue

With the development and promotion of driverless technology, researchers are focusing on designing varied types of external interfaces to induce trust in this new technology among road users. In this paper, we investigated the effectiveness of a multimodal external human–machine interface (eHMI) for driverless vehicles in a virtual environment, focusing on a two-way road scenario. Three phases, identifying, decelerating, and parking, were taken into account in the driverless-vehicle-to-pedestrian interaction process. Twelve eHMIs are proposed, combining three visual features (smile, arrow, and none), three audible features (human voice, warning sound, and none), and two physical features (yielding and not yielding). We conducted a study to identify a more efficient and safer eHMI for driverless vehicles interacting with pedestrians. The outcomes show that, in the case of yielding, multimodal eHMI designs achieved satisfactory interaction efficiency and pedestrian safety compared with single-modal designs. The visual modality of a driverless vehicle's eHMI has the greatest impact on pedestrian safety, and within the visual modality the "arrow" was more intuitive to identify than the "smile".

