Hand Motion-Aware Surgical Tool Localization and Classification from an Egocentric Camera

2021 ◽  
Vol 7 (2) ◽  
pp. 15
Author(s):  
Tomohiro Shimizu ◽  
Ryo Hachiuma ◽  
Hiroki Kajita ◽  
Yoshifumi Takatsume ◽  
Hideo Saito

Detecting surgical tools is an essential task for the analysis and evaluation of surgical videos. However, in open surgery such as plastic surgery, detection is difficult because some surgical tools, such as scissors and needle holders, have similar shapes. Unlike in endoscopic surgery, the tips of the tools are often hidden in the operating field and are not captured clearly due to low camera resolution, whereas the movements of the tools and hands can be captured. Because the different uses of each tool require different hand movements, hand movement data can be used to distinguish the two types of tools. We combined three modules, for localization, selection, and classification, to detect the two tools. In the localization module, we employed Faster R-CNN to detect surgical tools and the target hands, and in the classification module, we extracted hand movement information by combining ResNet-18 and an LSTM to classify the two tools. We created a dataset in which seven different types of open surgery were recorded, and we provided annotations for surgical tool detection. Our experiments show that our approach successfully detected the two different tools and outperformed two baseline methods.
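The selection step, which ties a detected tool to the hand using it, can be illustrated with a minimal sketch; the box format and the nearest-centre rule here are assumptions for illustration, not the paper's exact criterion:

```python
# Hypothetical sketch of a selection module: given tool and hand
# detections as (x1, y1, x2, y2) boxes, keep the tool whose centre is
# closest to a detected hand, since hand motion disambiguates the tool.

def centre(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def select_tool(tool_boxes, hand_boxes):
    """Return the index of the tool box nearest to any hand box."""
    best_idx, best_dist = None, float("inf")
    for i, tb in enumerate(tool_boxes):
        tx, ty = centre(tb)
        for hb in hand_boxes:
            hx, hy = centre(hb)
            d = ((tx - hx) ** 2 + (ty - hy) ** 2) ** 0.5
            if d < best_dist:
                best_idx, best_dist = i, d
    return best_idx

tools = [(10, 10, 30, 30), (100, 100, 140, 140)]
hands = [(95, 90, 150, 150)]
print(select_tool(tools, hands))  # 1: the second tool lies nearest the hand
```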

2017 ◽  
Vol 29 (5) ◽  
pp. 919-927 ◽  
Author(s):  
Ngoc Hung Pham ◽  
Takashi Yoshimi

This paper describes a process for adaptive learning of hand movements from human demonstration for robot manipulation actions using the Dynamic Movement Primitives (DMP) framework. The process includes 1) tracking hand movement in the human demonstration, 2) segmenting the hand movement, and 3) adaptive learning with the DMP framework. We implement an extended DMP model with a modified formulation for hand movement data observed from human demonstration, including 3D hand position, orientation, and finger distance. We evaluate the movements generated by the DMP model, reproduced either without changes or adapted to a change in the goal of the movement. The adapted movement data are used to control a robot arm with a parallel gripper through the spatial position and orientation of its end-effector.
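A minimal one-dimensional DMP of the kind the framework builds on can be sketched as follows; the gains, time step, and simple goal-scaling rule are illustrative assumptions, not the paper's extended formulation:

```python
import numpy as np

# Minimal 1-D DMP sketch (gains and step are illustrative):
# tau*y'' = alpha*(beta*(g - y) - y') + f, with the forcing term f
# extracted from a demonstration and rescaled for a new goal.

alpha, beta, dt, tau = 25.0, 6.25, 0.01, 1.0
t = np.arange(0, 1, dt)
demo = np.minimum(t * 2, 1.0) ** 2          # toy demonstration, y: 0 -> 1

y0, g = demo[0], demo[-1]
yd = np.gradient(demo, dt)
ydd = np.gradient(yd, dt)
# forcing term that would reproduce the demonstration
f_demo = tau * ydd - alpha * (beta * (g - demo) - yd)

def rollout(goal, f=f_demo):
    scale = (goal - y0) / (g - y0)          # simple goal scaling
    y, v = y0, 0.0
    out = []
    for fi in f:                            # Euler integration
        a = (alpha * (beta * (goal - y) - v) + fi * scale) / tau
        v += a * dt
        y += v * dt
        out.append(y)
    return np.array(out)

same = rollout(1.0)      # reproduces the demonstration's endpoint
adapted = rollout(2.0)   # same movement shape adapted to a new goal
print(same[-1], adapted[-1])
```

The spring-damper term guarantees convergence to the goal, so changing `goal` adapts the endpoint while the learned forcing term preserves the movement's shape.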


2018 ◽  
Vol 11 (6) ◽  
Author(s):  
Damla Topalli ◽  
Nergiz Ercil Cagiltay

Endoscopic surgery procedures require specific skills, such as eye-hand coordination, to be developed. Current education programs face difficulties in providing appropriate skill-improvement and assessment methods in this field. This study aims to propose objective metrics for hand-movement skills and to assess eye-hand coordination. An experimental study is conducted with 15 surgical residents to test the newly proposed measures. Two computer-based, two-handed endoscopic surgery practice scenarios are developed in a simulation environment to gather the participants' eye-gaze data with the help of an eye tracker, as well as the related hand movement data through haptic interfaces. Additionally, participants' eye-hand coordination skills are analyzed. The results indicate higher correlations between the intermediates' eye and hand movements compared to the novices'. An increase in the intermediates' visual concentration leads to smoother hand movements, whereas the novices' hand movements tend to remain at a standstill. After the first round of practice, all participants' eye-hand coordination skills improved on the specific task targeted in this study. According to these results, it can be concluded that the proposed metrics can potentially provide additional insights into trainees' eye-hand coordination skills and help instructional system designers better address training requirements.
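One way such an eye-hand coordination metric could be computed is as a simple correlation between gaze and hand trajectories; this is an illustrative stand-in with synthetic traces, not the study's actual measure:

```python
import numpy as np

# Illustrative metric only: correlate gaze and hand-tip x-coordinates
# over a trial. A hand that closely follows the gaze (as reported for
# intermediates) should yield a higher correlation than a lagging,
# noisy hand (as reported for novices).

def eye_hand_correlation(gaze_x, hand_x):
    gaze_x, hand_x = np.asarray(gaze_x), np.asarray(hand_x)
    return float(np.corrcoef(gaze_x, hand_x)[0, 1])

t = np.linspace(0, 1, 200)
gaze = np.sin(2 * np.pi * t)
hand_led = np.sin(2 * np.pi * (t - 0.02))   # hand closely follows gaze
hand_lag = (np.sin(2 * np.pi * (t - 0.25))  # quarter-cycle lag plus jitter
            + 0.3 * np.random.default_rng(0).standard_normal(200))

print(eye_hand_correlation(gaze, hand_led) > eye_hand_correlation(gaze, hand_lag))
```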


Fractals ◽  
2019 ◽  
Vol 27 (04) ◽  
pp. 1950042 ◽  
Author(s):  
HAMIDREZA NAMAZI ◽  
SAJAD JAFARI

Analysis of body movement is an important aspect of rehabilitation science, and hand movement, as one of the major human movements, has attracted the attention of many researchers. Decoding movements by analyzing the related biosignals is central to this goal. In this research, we perform a complexity analysis of the electromyography (EMG) signal recorded during simple hand movements, employing the fractal dimension as the indicator of signal complexity. The EMG signal was recorded from subjects while they performed six simple hand movements, and we applied fractal analysis to the signal. Our analysis showed that the EMG signal has the greatest fractal dimension for lateral hand movements (for holding thin, flat objects) and the lowest for hook movements (for supporting a heavy load). The approach demonstrated here can be applied to the analysis of other types of biosignals to investigate human reactions to different types of stimuli.
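Higuchi's method is one common way to estimate the fractal dimension of a signal such as EMG; the following sketch uses synthetic signals and an assumed `k_max`, not the paper's data or settings:

```python
import numpy as np

# Sketch of Higuchi's fractal dimension estimate. A smooth signal
# should score near 1, white noise near 2; k_max is an assumption.

def higuchi_fd(x, k_max=8):
    x = np.asarray(x, dtype=float)
    n = len(x)
    lengths = []
    for k in range(1, k_max + 1):
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)        # subsampled series x_m^k
            if len(idx) < 2:
                continue
            d = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)
            lk.append(d * norm / k)         # curve length at scale k
        lengths.append(np.mean(lk))
    ks = np.arange(1, k_max + 1)
    # FD is the slope of log L(k) against log(1/k)
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
    return slope

rng = np.random.default_rng(0)
line = np.linspace(0, 1, 1000)              # smooth signal, FD near 1
noise = rng.standard_normal(1000)           # white noise, FD near 2
print(higuchi_fd(line), higuchi_fd(noise))
```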


Author(s):  
Xudong Zhang ◽  
Don B. Chaffin

This paper presents a new method to empirically investigate the effects of task factors on three-dimensional (3D) dynamic postures during seated reaching movements. The method relies on a statistical model in which the effects of hand location and those of various task factors on dynamic postures are distinguished. Two statistical procedures are incorporated: a regression description of the relationship between the time-varying hand location and postural profiles to compress the movement data, and a series of analyses of variance to test the hypothesized task effects using instantaneous postures with prescribed hand locations as dependent measures. The use of this method is illustrated by an experiment that examines two generic task factors: 1) hand movement direction, and 2) motion completion time. The results suggest that hand motion direction is a significant task factor and should be included as an important attribute when describing or modeling instantaneous postures. It was also found that the time to complete a motion under a self-paced mode was significantly different from that under a motivated mode, but the time difference did not significantly affect instantaneous postures. The concept of an instantaneous posture and its use in dynamic studies of movements are discussed. Some understanding of human postural control, as well as implications for developing a general dynamic posture prediction model, is also presented.
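The hypothesis-testing step can be illustrated with a one-way ANOVA on instantaneous posture angles grouped by movement direction; the data and angle values below are invented for illustration, not taken from the experiment:

```python
import numpy as np

# Illustrative one-way ANOVA: test whether a posture variable (here a
# synthetic elbow angle, in degrees) differs between hand-movement
# directions. A large F statistic indicates a direction effect.

def one_way_anova_f(*groups):
    """Return the F statistic for a one-way ANOVA over the groups."""
    all_data = np.concatenate(groups)
    grand = all_data.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((g - np.mean(g)) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_data) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

rng = np.random.default_rng(1)
up = 35 + 2 * rng.standard_normal(20)       # elbow angles, upward reaches
lateral = 42 + 2 * rng.standard_normal(20)  # elbow angles, lateral reaches
print(one_way_anova_f(up, lateral))         # well above typical F thresholds
```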


2020 ◽  
Vol 132 (5) ◽  
pp. 1358-1366
Author(s):  
Chao-Hung Kuo ◽  
Timothy M. Blakely ◽  
Jeremiah D. Wander ◽  
Devapratim Sarma ◽  
Jing Wu ◽  
...  

OBJECTIVE
The activation of the sensorimotor cortex as measured by electrocorticographic (ECoG) signals has been correlated with contralateral hand movements in humans, as precisely as the level of individual digits. However, the relationship between individual and multiple synergistic finger movements and the neural signal as detected by ECoG has not been fully explored. The authors used intraoperative high-resolution micro-ECoG (µECoG) on the sensorimotor cortex to link neural signals to finger movements across several context-specific motor tasks.
METHODS
Three neurosurgical patients with cortical lesions over eloquent regions participated. During awake craniotomy, a sensorimotor cortex area of hand movement was localized by high-frequency responses measured by an 8 × 8 µECoG grid of 3-mm interelectrode spacing. Patients performed a flexion movement of the thumb or index finger, or a pinch movement of both, based on a visual cue. High-gamma (HG; 70–230 Hz) filtered µECoG was used to identify dominant electrodes associated with thumb and index movement. Hand movements were recorded by a dataglove simultaneously with µECoG recording.
RESULTS
In all 3 patients, the electrodes controlling thumb and index finger movements were identifiable approximately 3–6 mm apart by the HG-filtered µECoG signal. For HG power of cortical activation measured with µECoG, the thumb and index signals in the pinch movement were similar to those observed during thumb-only and index-only movement, respectively (all p > 0.05). Index finger movements, measured by the dataglove joint angles, were similar in both the index-only and pinch movements (p > 0.05). However, despite similar activation across the conditions, markedly decreased thumb movement was observed in pinch relative to independent thumb-only movement (all p < 0.05).
CONCLUSIONS
HG-filtered µECoG signals effectively identify dominant regions associated with thumb and index finger movement. For pinch, the µECoG signal comprises a combination of the signals from individual thumb and index movements. However, while the relationship between the index finger joint angle and HG-filtered signal remains consistent between conditions, there is not a fixed relationship for thumb movement. Although the HG-filtered µECoG signal is similar in both thumb-only and pinch conditions, the actual thumb movement is markedly smaller in the pinch condition than in the thumb-only condition. This implies a nonlinear relationship between the cortical signal and the motor output for some, but importantly not all, movement types. This analysis provides insight into the tuning of the motor cortex toward specific types of motor behaviors.
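Extracting high-gamma band power from a single channel can be sketched with a simple FFT-based estimate; the sampling rate and test signal below are assumptions, and the study's actual pipeline (filtering, referencing, statistics) is richer than this:

```python
import numpy as np

# Sketch: fraction of signal power in the high-gamma (70-230 Hz) band,
# the band used in the abstract. Sampling rate and signals are invented.

fs = 1000                                   # Hz, assumed sampling rate

def high_gamma_power(sig, fs, lo=70.0, hi=230.0):
    spec = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return spec[band].sum() / spec.sum()    # fraction of power in band

t = np.arange(0, 1, 1.0 / fs)
rest = np.sin(2 * np.pi * 10 * t)                   # low-frequency rest rhythm
move = rest + 2.0 * np.sin(2 * np.pi * 120 * t)     # added HG burst on movement
print(high_gamma_power(move, fs) > high_gamma_power(rest, fs))
```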


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Andrea Cavallo ◽  
Luca Romeo ◽  
Caterina Ansuini ◽  
Francesca Battaglia ◽  
Lino Nobili ◽  
...  

Failure to develop prospective motor control has been proposed to be a core phenotypic marker of autism spectrum disorders (ASD). However, whether genuine differences in prospective motor control permit discriminating between ASD and non-ASD profiles over and above individual differences in motor output remains unclear. Here, we combined high precision measures of hand movement kinematics and rigorous machine learning analyses to determine the true power of prospective movement data to differentiate children with autism and typically developing children. Our results show that while movement is unique to each individual, variations in the kinematic patterning of sequential grasping movements genuinely differentiate children with autism from typically developing children. These findings provide quantitative evidence for a prospective motor control impairment in autism and indicate the potential to draw inferences about autism on the basis of movement kinematics.
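A toy sketch of the analysis idea, classifying single trials from kinematic feature vectors, is shown below; the groups, features, and the nearest-centroid rule are synthetic stand-ins, whereas the paper used richer kinematics and rigorously cross-validated machine learning:

```python
import numpy as np

# Synthetic illustration: trials are 3-D kinematic feature vectors
# (think peak wrist velocity, grip aperture, movement time); one group
# is mean-shifted to mimic a genuine group difference in patterning.

rng = np.random.default_rng(2)
asd = rng.standard_normal((40, 3)) + np.array([1.2, -0.8, 0.6])  # shifted group
td = rng.standard_normal((40, 3))                                # reference group

def nearest_centroid_predict(group_a, group_b, x):
    """Assign trial x to the group whose feature centroid is closer."""
    da = np.linalg.norm(x - group_a.mean(axis=0))
    db = np.linalg.norm(x - group_b.mean(axis=0))
    return "ASD" if da < db else "TD"

# leave-one-out over the shifted group: most trials should be recovered
hits = sum(
    nearest_centroid_predict(np.delete(asd, i, axis=0), td, asd[i]) == "ASD"
    for i in range(len(asd))
)
print(hits / len(asd))
```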


Author(s):  
JAMES DAVIS ◽  
MUBARAK SHAH

This paper presents a glove-free method for tracking hand movements using a set of 3-D models. In this approach, the hand is represented by five cylindrical models fit to the third phalangeal segments of the fingers. Six 3-D motion parameters are calculated for each model, corresponding to the movement of the fingertips in the image plane. Trajectories of the moving models are then established to show the 3-D nature of the hand motion.
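Recovering six rigid-motion parameters (three for rotation, three for translation) of one such model between frames can be illustrated with the standard Kabsch/Procrustes solution; this is a generic sketch from corresponding 3-D points, not the paper's tracking algorithm:

```python
import numpy as np

# Hedged illustration: estimate R, t with q ~= R @ p + t for two
# frames of the same model points, via the Kabsch SVD solution.

def rigid_motion(p, q):
    """p, q: 3xN corresponding point sets; return rotation R and translation t."""
    pc, qc = p.mean(axis=1, keepdims=True), q.mean(axis=1, keepdims=True)
    h = (p - pc) @ (q - qc).T
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))      # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = qc - r @ pc
    return r, t

p = np.random.default_rng(3).standard_normal((3, 8))   # model points, frame 1
angle = np.deg2rad(15)                                 # rotate 15 deg about z
r_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([[0.1], [0.2], [0.05]])
q = r_true @ p + t_true                                # same points, frame 2

r_est, t_est = rigid_motion(p, q)
print(np.allclose(r_est, r_true), np.allclose(t_est, t_true))  # True True
```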


1979 ◽  
Vol 48 (1) ◽  
pp. 207-214 ◽  
Author(s):  
Luis R. Marcos

Sixteen subordinate bilingual subjects produced 5-min. monologues in their nondominant languages, i.e., English or Spanish. Hand-movement activity manifested during the videotaped monologues was scored and related to measures of fluency in the nondominant language. The hand-movement behavior categorized as Groping Movement was significantly related to all of the nondominant-language fluency measures. These correlations support the assumption that Groping Movement may have a function in the process of verbal encoding. The results are discussed in terms of the possibility of monitoring central cognitive processes through the study of “visible” motor behavior.


2018 ◽  
Vol 15 (5) ◽  
pp. 172988141880213 ◽  
Author(s):  
Yuanfang Wan ◽  
Zishan Han ◽  
Jun Zhong ◽  
Guohua Chen

With the development of robotics, intelligent neuroprostheses for amputees have attracted increasing attention, and robot control based on electrocardiography, electromyography, and electroencephalography is an active research area. In medical research, electrode arrays are commonly used as sensors for surface electromyography. Although these sensors collect more accurate data and sample at higher frequencies, they have no advantage in portability or ease of use. In recent years, small surface electromyography sensors have also become available for research. The portability of the sensor and the speed of the processing method directly affect the development of bionic prostheses. A consumer-grade surface electromyography device was selected as the sensor in this study. We first propose a data structure that converts raw surface electromyography signals from an array into a matrix (which we call a surface electromyography graph), and then classify it with a convolutional neural network. Discrete surface electromyography signals recorded from three people performing 14 gestures (a set widely used in other research to evaluate classifier performance) were used to train the classifier, yielding an accuracy of 97.27%. The impact of different convolutional neural network components was tested on these data, and the best-performing configuration was selected to build the classifier used in this article. The NinaPro database 5 (one of the largest surface electromyography datasets), which comprises hand movement data of 10 intact subjects recorded with two Myo armbands, was also used to evaluate our method; classification accuracy increased by 13.76% on average when using both Myo armbands and by 18.92% on average when using a single armband.
To drive the robot hand (a bionic manipulator), a group of continuous surface electromyography signals was recorded to train the classifier, and an accuracy of 91.72% was achieved. We also used the same method to collect surface electromyography data from a participant with a hand amputation, classified it with the network described above, and achieved an accuracy of 89.37%. Finally, the classifier was deployed on a microcontroller to drive the bionic manipulator; the full video URL is given in the conclusion, with both the able-bodied and the amputee participants testing the manipulator. These results suggest that this method will help facilitate the development and application of surface electromyography neuroprostheses.
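The surface electromyography graph idea, restructuring a stream of multi-channel samples into 2-D matrices that an image-style CNN can consume, can be sketched as follows; the window size, step, and channel count are assumptions for illustration:

```python
import numpy as np

# Sketch of the data restructuring only: slice a (T, channels) stream
# of raw sEMG samples into overlapping (window, channels) matrices
# ("sEMG graphs") suitable as 2-D inputs for a convolutional network.

def semg_to_graphs(samples, window=16, step=8):
    """samples: (T, C) raw channel readings -> list of (window, C) matrices."""
    samples = np.asarray(samples)
    return [samples[i:i + window]
            for i in range(0, len(samples) - window + 1, step)]

rng = np.random.default_rng(4)
raw = rng.integers(-128, 128, size=(64, 8))   # 64 samples, 8 sEMG channels
graphs = semg_to_graphs(raw)
print(len(graphs), graphs[0].shape)           # 7 (16, 8)
```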


Author(s):  
Angga Rahagiyanto

The Indonesian Sign Language System (SIBI) has been widely studied by researchers using different types of cameras and sensors, with the ultimate goal of producing a robust, fast, and accurate movement recognition process. One approach captures movement using the sensors of the MYO Armband. This paper explains how the raw data generated by the MYO Armband sensors are used and integrated so that they can represent complete hand, arm, and combined movements in the SIBI sign language dictionary. The MYO Armband provides five sensor streams: accelerometer, gyroscope, orientation, Euler orientation, and EMG. Each sensor produces data with a different scale and size, which requires a process to make the data uniform. This study uses the min-max method to normalize the data from each MYO Armband sensor and the Moment Invariant method to extract feature vectors of hand movements. Testing is performed on both static and dynamic sign language movements, using cross-validation.
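The min-max normalization step described above can be sketched directly; the sensor values below are invented examples, not MYO Armband recordings:

```python
import numpy as np

# Min-max normalization: each sensor stream (accelerometer, gyroscope,
# orientation, EMG, ...) has its own scale, so each channel is rescaled
# to [0, 1] independently before feature extraction.

def min_max_normalize(channel):
    channel = np.asarray(channel, dtype=float)
    lo, hi = channel.min(), channel.max()
    if hi == lo:
        return np.zeros_like(channel)       # constant channel: no spread
    return (channel - lo) / (hi - lo)

accel = np.array([-2.0, 0.0, 1.0, 2.0])     # e.g. accelerometer, in g
emg = np.array([-128, 0, 64, 127])          # e.g. raw 8-bit EMG counts
print(min_max_normalize(accel).tolist())    # [0.0, 0.5, 0.75, 1.0]
print(min_max_normalize(emg)[0], min_max_normalize(emg)[-1])
```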

