CNN architecture for robotic arm control in a 3D virtual environment by means of EMG signals

2017 ◽  
Vol 10 (28) ◽  
pp. 1377-1390 ◽  
Author(s):  
Natalie Segura Velandia ◽  
Robinson Jimenez Moreno ◽  
Ruben Dario Hernandez

This paper presents the development of a 3D virtual environment to validate the effectiveness of a Convolutional Neural Network (CNN) in a virtual application, controlling the movements of a manipulator, or robotic arm, through commands recognized by the network. The CNN architecture was designed to recognize five (5) gestures from electromyography (EMG) signals captured by surface electrodes located on the forearm and processed by the Wavelet Packet Transform (WPT). In addition, the environment consists of a manipulator with 3 degrees of freedom, a clamp-type end effector, and three objects to move from one place to another. Finally, the network reaches an accuracy of 97.17%, and the tests that were performed reached an average accuracy of 98.95%.
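The abstract does not give the network's layer sizes, so the following is only a minimal numpy sketch of the data flow such a classifier implies: a 1D convolution over a multi-channel EMG window, max-pooling, and a dense softmax layer over five gesture classes. All shapes (4 electrode channels, 64 samples, 8 filters) and the random weights are hypothetical placeholders, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels, bias):
    """Valid-mode 1D convolution: x is (channels, length), kernels is (out, in, k)."""
    out_ch, in_ch, k = kernels.shape
    length = x.shape[1] - k + 1
    out = np.zeros((out_ch, length))
    for o in range(out_ch):
        for t in range(length):
            out[o, t] = np.sum(kernels[o] * x[:, t:t + k]) + bias[o]
    return np.maximum(out, 0.0)  # ReLU activation

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy EMG window: 4 surface-electrode channels, 64 samples (hypothetical sizes).
x = rng.standard_normal((4, 64))

# Untrained random weights, purely illustrative of the forward pass.
k1 = rng.standard_normal((8, 4, 5)) * 0.1   # 8 filters of width 5
b1 = np.zeros(8)
feat = conv1d(x, k1, b1)                     # (8, 60) feature maps
pooled = feat.reshape(8, 15, 4).max(axis=2)  # max-pool by 4 -> (8, 15)
w = rng.standard_normal((5, 8 * 15)) * 0.1   # dense layer to 5 gesture classes
probs = softmax(w @ pooled.ravel())

gesture = int(np.argmax(probs))              # index of the recognized gesture
```

In the actual system the input would be WPT coefficients rather than raw samples, and the weights would come from training.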

2020 ◽  
Vol 14 ◽  
Author(s):  
Yuanlu Zhu ◽  
Ying Li ◽  
Jinling Lu ◽  
Pengcheng Li

Brain-computer interfaces (BCIs) for robotic arm control have been studied to improve the quality of life of people with severe motor disabilities. Challenges remain, however, in accomplishing a complex task that requires a series of actions. An efficient switch and a timely cancel command are helpful in robotic arm applications. Based on the above, we proposed an asynchronous hybrid BCI in this study. The basic control of a robotic arm with six degrees of freedom was a steady-state visual evoked potential (SSVEP)-based BCI with fifteen target classes. We designed an EOG-based switch that uses a triple blink to either activate or deactivate the flashing of the SSVEP-based BCI. Stopping the flashing in the idle state helps reduce visual fatigue and the false activation rate (FAR). Additionally, users were allowed to cancel the current command simply by winking in the feedback phase, avoiding the execution of incorrect commands. Fifteen subjects participated in and completed the experiments. The cue-based experiment obtained an average accuracy of 92.09% and an information transfer rate (ITR) of 35.98 bits/min. The mean FAR of the switch was 0.01/min. Furthermore, all subjects succeeded in asynchronously operating the robotic arm to grasp, lift, and move a target object from the initial position to a specific location. The results indicate the feasibility of combining EOG and SSVEP signals and the flexibility of the EOG signal in BCI for completing a complicated robotic arm control task.
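The reported ITR is presumably computed with the standard Wolpaw formula, ITR = (log2 N + P log2 P + (1-P) log2((1-P)/(N-1))) * 60/T. Below is a small sketch with the abstract's values (N = 15 targets, P = 92.09%); the selection time T is not stated in the abstract, so it appears only as a free parameter (a value near 5.35 s/selection would reproduce the reported 35.98 bits/min).

```python
import math

def bits_per_selection(n_classes, p):
    """Wolpaw formula: information conveyed by one selection at accuracy p."""
    if p >= 1.0:
        return math.log2(n_classes)
    return (math.log2(n_classes)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n_classes - 1)))

def itr_bits_per_min(n_classes, p, seconds_per_selection):
    """Scale bits/selection to bits/minute by the selection rate."""
    return bits_per_selection(n_classes, p) * 60.0 / seconds_per_selection

# Values from the abstract: 15 targets, 92.09% accuracy.
b = bits_per_selection(15, 0.9209)   # ~3.21 bits per selection
```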


2007 ◽  
Vol 04 (04) ◽  
pp. 645-670 ◽  
Author(s):  
QUANJUN SONG ◽  
YONG YU ◽  
YUNJIAN GE ◽  
ZHEN GAO ◽  
HUANGHUAN SHEN ◽  
...  

An EMG-driven Arm Wrestling Robot (AWR) is developed for the purpose of studying neuromuscular control of human elbow movements. The AWR arm has two degrees of freedom and integrates a mechanical arm, elbow/wrist force sensors, servo motors, encoders, MEMS accelerometers, and a USB camera; it is used to estimate the tension generated by individual muscles from recorded electromyograms (EMG). The surface electromyography signal from the upper limb is sampled from a real player under the same conditions. Using the wavelet packet transformation (WPT) and an autoregressive (AR) model, the characteristics of the EMG signals are extracted. Then, an artificial neural network is adopted to estimate the elbow joint force. The effectiveness of the control method, in which joint force estimated by the neural network from WPT and AR features drives the force control, is confirmed by experiments. The purpose of this paper is to describe the design objectives, fundamental components, and implementation of our real-time, EMG-driven AWR arm.
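The abstract does not specify the AR model order or the fitting procedure; the sketch below fits AR coefficients by ordinary least squares on lagged samples (a common choice), with the order and the synthetic test signal being assumptions for illustration. The recovered coefficient vector is the kind of compact feature the neural network would consume.

```python
import numpy as np

def ar_coefficients(signal, order=4):
    """Fit x[t] ~ sum_i a_i * x[t-i] by least squares; return the a_i."""
    x = np.asarray(signal, dtype=float)
    # Design matrix of lagged samples: row t holds [x[t-1], ..., x[t-order]].
    rows = [x[t - order:t][::-1] for t in range(order, len(x))]
    X = np.array(rows)
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a  # length-`order` feature vector for the classifier/regressor

# Synthetic check: a stable AR(2) process should be recovered approximately.
rng = np.random.default_rng(1)
true = np.array([1.2, -0.5])
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = true[0] * x[t - 1] + true[1] * x[t - 2] + 0.01 * rng.standard_normal()
est = ar_coefficients(x, order=2)   # close to [1.2, -0.5]
```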


2021 ◽  
pp. 004051752098752
Author(s):  
Zhujun Wang ◽  
Jianping Wang ◽  
Xianyi Zeng ◽  
Shukla Sharma ◽  
Yingmei Xing ◽  
...  

This paper proposes a probabilistic neural network-based model for predicting and controlling garment fit levels from garment ease allowances, digital pressures, and fabric mechanical properties measured in a three-dimensional (3D) virtual environment. The predicted fit levels include both comprehensive and local fit levels. The model was set up by learning from data measured during a series of virtual try-on experiments (input data) and real try-on experiments (output data), and then used to predict fit for different garment styles, for example, loose and tight fits. Finally, the performance of the proposed model was compared with the Linear Regression model, the Support Vector Machine model, the Radial Basis Function Artificial Neural Network model, and the Back Propagation Artificial Neural Network model. The comparison revealed that the prediction accuracy of the proposed model was superior to those of the other models. Furthermore, we put forward a new interactive garment design process in a 3D virtual environment based on the proposed model. Through interactions between real pattern adjustments and virtual garment demonstrations, this new design process will enable designers to rapidly, accurately, and automatically predict relevant garment fit levels without undertaking expensive and time-consuming real try-ons.
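A probabilistic neural network is essentially a Parzen-window density estimator per class: each class's score is the average Gaussian kernel response over its training exemplars, and the class with the largest score wins. The sketch below is a hedged illustration only; the feature names, scaling, and smoothing parameter sigma are invented for the example and are not taken from the paper.

```python
import numpy as np

def pnn_predict(x, train_X, train_y, sigma=0.5):
    """Probabilistic neural network: Parzen-window class-density estimate,
    then pick the class with the largest average kernel response."""
    x = np.asarray(x, dtype=float)
    scores = {}
    for cls in np.unique(train_y):
        pts = train_X[train_y == cls]
        d2 = np.sum((pts - x) ** 2, axis=1)          # squared distances
        scores[cls] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
    return max(scores, key=scores.get)

# Hypothetical 2-feature exemplars (e.g. scaled ease allowance and digital
# pressure), labelled with fit level 0 (loose) or 1 (tight).
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
y = np.array([0, 0, 1, 1])
label = pnn_predict([0.85, 0.15], X, y)   # classified as fit level 0
```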


2021 ◽  
Vol 11 (17) ◽  
pp. 7917
Author(s):  
Hiba Sekkat ◽  
Smail Tigani ◽  
Rachid Saadane ◽  
Abdellah Chehri

While working side by side, humans and robots complement each other nowadays, and we may say that they work hand in hand. This study aims to evolve the grasping task by reaching the intended object based on deep reinforcement learning. Thereby, in this paper, we propose a deep deterministic policy gradient approach that can be applied to a multi-degree-of-freedom robotic arm for autonomous object grasping according to object classification and a given task. In this study, the approach is realized by a five-degree-of-freedom robotic arm that reaches the targeted object using the inverse kinematics method. You Only Look Once v5 is employed for object detection, and backward projection is used to recover the three-dimensional position of the target. After the joint angles at the detected position are computed by inverse kinematics, the algorithm moves the robot arm to the target object's location. Our approach provides a neural inverse kinematics solution that increases overall performance, and its simulation results reveal its advantages compared with the traditional one. The robot's end gripper can reach the targeted location by calculating the angle of every joint within an acceptable range of error, and the accuracy of both the joint angles and the posture is satisfactory. Experiments reveal the performance of our proposal compared with state-of-the-art approaches to vision-based grasp tasks. This is a new approach to grasping an object via inverse kinematics; it is not only easier than the standard one but also more meaningful for multi-degree-of-freedom robots.
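The paper's arm has five degrees of freedom; as a simplified stand-in, the closed-form inverse kinematics of a planar 2-link arm below shows the general pattern: compute joint angles for a detected target position, then verify them by running forward kinematics back to the target. The link lengths and target coordinates are arbitrary example values.

```python
import math

def ik_2link(x, y, l1, l2):
    """Closed-form inverse kinematics of a planar 2-link arm (elbow-down)."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)   # law of cosines
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    t2 = math.acos(c2)                               # elbow angle
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2

def fk_2link(t1, t2, l1, l2):
    """Forward kinematics: end-effector position from joint angles."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y

t1, t2 = ik_2link(1.2, 0.5, 1.0, 1.0)
x, y = fk_2link(t1, t2, 1.0, 1.0)   # round-trips to (1.2, 0.5)
```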


2012 ◽  
Vol 6 (1) ◽  
pp. 5-15 ◽  
Author(s):  
Michael R Dawson ◽  
Farbod Fahimi ◽  
Jason P Carey

The objective of above-elbow myoelectric prostheses is to reestablish the functionality of missing limbs and increase the quality of life of amputees. By using electromyography (EMG) electrodes attached to the surface of the skin, amputees are able to control motors in myoelectric prostheses by voluntarily contracting the muscles of their residual limb. This work describes the development of an inexpensive myoelectric training tool (MTT) designed to help upper limb amputees learn how to use myoelectric technology in advance of receiving their actual myoelectric prosthesis. The training tool consists of a physical and simulated robotic arm, signal acquisition hardware, controller software, and a graphical user interface. The MTT improves over earlier training systems by allowing a targeted muscle reinnervation (TMR) patient to control up to two degrees of freedom simultaneously. The training tool has also been designed to function as a research prototype for novel myoelectric controllers. A preliminary experiment was performed to evaluate the effectiveness of the MTT as a learning tool and to identify any issues with the system. Five able-bodied participants performed a motor-learning task using the EMG-controlled robotic arm, with the goal of moving five balls from one box to another as quickly as possible. The results indicate that the subjects improved their skill in myoelectric control over the course of the trials. A usability survey administered after the trials showed that the shoulder degree of freedom was the most difficult to control.
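The abstract does not describe the controller's signal processing, but a common baseline for proportional myoelectric control is to rectify and smooth the raw EMG into an envelope, then map above-threshold amplitude to motor speed. The sketch below follows that baseline only; the thresholds, gains, and synthetic trace are all assumptions, not the MTT's actual controller.

```python
import numpy as np

def emg_envelope(raw, window=50):
    """Full-wave rectify, then moving-average smooth, a raw EMG trace."""
    rect = np.abs(np.asarray(raw, dtype=float))
    kernel = np.ones(window) / window
    return np.convolve(rect, kernel, mode="same")

def proportional_command(envelope, threshold=0.1, gain=2.0, max_speed=1.0):
    """Map envelope amplitude above a dead-band threshold to motor speed."""
    cmd = gain * np.maximum(envelope - threshold, 0.0)
    return np.minimum(cmd, max_speed)

# Synthetic trace: 500 samples of rest, then 500 samples of muscle activity.
rng = np.random.default_rng(2)
rest = 0.02 * rng.standard_normal(500)
burst = 0.5 * rng.standard_normal(500)
env = emg_envelope(np.concatenate([rest, burst]))
speed = proportional_command(env)   # ~0 at rest, positive during the burst
```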


2021 ◽  
Vol 3 (1) ◽  
Author(s):  
Mohammed Aliy Mohammed ◽  
Fetulhak Abdurahman ◽  
Yodit Abebe Ayalew

Abstract Background Automating cytology-based cervical cancer screening could alleviate the shortage of skilled pathologists in developing countries. Up until now, computer vision experts have attempted numerous semi- and fully automated approaches to address this need, and leveraging the accuracy and reproducibility of deep neural networks has become common. In this regard, the purpose of this study is to classify single-cell Pap smear (cytology) images using pre-trained deep convolutional neural network (DCNN) image classifiers. We fine-tuned the top ten pre-trained DCNN image classifiers and evaluated them using five-class single-cell Pap smear images from the SIPaKMeD dataset. The pre-trained DCNN image classifiers were selected from Keras Applications based on their top-1 accuracy. Results Our experiments demonstrated that, of the selected top ten pre-trained DCNN image classifiers, DenseNet169 performed best, with an average accuracy, precision, recall, and F1-score of 0.990, 0.974, 0.974, and 0.974, respectively. Moreover, it surpassed the benchmark accuracy reported by the creators of the dataset by 3.70%. Conclusions Although DenseNet169 is small compared with the other pre-trained DCNN image classifiers evaluated, it is still not suitable for mobile or edge devices. Further experimentation with mobile or small-size DCNN image classifiers is required to extend the applicability of the models to real-world demands. In addition, since all experiments used the SIPaKMeD dataset, additional experiments with new datasets will be needed to assess the generalizability of the models.
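The reported precision, recall, and F1 are presumably macro-averages over the five classes. The numpy sketch below shows how such metrics fall out of a confusion matrix; the 3-class matrix used here is a toy example, not the paper's data.

```python
import numpy as np

def macro_metrics(cm):
    """Accuracy plus macro-averaged precision/recall/F1 from a confusion
    matrix (rows = true class, columns = predicted class)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)   # per predicted class
    recall = tp / cm.sum(axis=1)      # per true class
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision.mean(), recall.mean(), f1.mean()

# Toy 3-class matrix, purely illustrative.
cm = [[48, 1, 1],
      [2, 46, 2],
      [1, 1, 48]]
acc, p, r, f1 = macro_metrics(cm)
```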

