Use of Commercial Off-The-Shelf Devices for the Detection of Manual Gestures in Surgery: Systematic Literature Review

10.2196/11925 ◽  
2019 ◽  
Vol 21 (5) ◽  
pp. e11925 ◽  
Author(s):  
Fernando Alvarez-Lopez ◽  
Marcelo Fabián Maina ◽  
Francesc Saigí-Rubió

Background The increasingly pervasive presence of technology in the operating room raises the need to study the interaction between the surgeon and computer system. A new generation of tools known as commercial off-the-shelf (COTS) devices enabling touchless gesture–based human-computer interaction is currently being explored as a solution in surgical environments. Objective The aim of this systematic literature review was to provide an account of the state of the art of COTS devices in the detection of manual gestures in surgery and to identify their use as a simulation tool for motor skills teaching in minimally invasive surgery (MIS). Methods For this systematic literature review, a search was conducted in PubMed, Excerpta Medica dataBASE, ScienceDirect, Espacenet, OpenGrey, and the Institute of Electrical and Electronics Engineers databases. Articles published between January 2000 and December 2017 on the use of COTS devices for gesture detection in surgical environments and in simulation for surgical skills learning in MIS were evaluated and selected. Results A total of 3180 studies were identified, 86 of which met the search selection criteria. Microsoft Kinect (Microsoft Corp) and the Leap Motion Controller (Leap Motion Inc) were the most widely used COTS devices. The most common intervention was image manipulation in surgical and interventional radiology environments, followed by interaction with virtual reality environments for educational or interventional purposes. The possibility of using this technology to develop portable low-cost simulators for skills learning in MIS was also examined. As most of the articles identified in this systematic review were proof-of-concept or prototype user testing and feasibility testing studies, we concluded that the field was still in the exploratory phase in areas requiring touchless manipulation within environments and settings that must adhere to asepsis and antisepsis protocols, such as angiography suites and operating rooms. Conclusions COTS devices applied to hand and instrument gesture–based interfaces in the field of simulation for skills learning and training in MIS could open up a promising field to achieve ubiquitous training and presurgical warm up.

2018 ◽  
Author(s):  
Fernando Alvarez-Lopez ◽  
Marcelo Fabián Maina ◽  
Francesc Saigí-Rubió

BACKGROUND The increasingly pervasive presence of technology in the operating room (OR) raises the need to study the interaction between the surgeon and the computer system. A new generation of tools known as commercial off-the-shelf (COTS) devices that enable non-contact gesture-based human-computer interaction (HCI) is currently being explored as a solution in surgical environments. OBJECTIVE The aim of this systematic review was to provide an account of the state of the art of COTS devices in the detection of manual gestures in surgery, and to identify their use as a simulation tool for teaching motor skills in minimally invasive surgery (MIS). METHODS A systematic literature review was conducted in PubMed, Embase, ScienceDirect and IEEE for articles published between January 2000 and 2016 on the use of COTS devices for gesture detection in surgical environments and in simulation for surgical skills learning in MIS. RESULTS A total of 2709 studies were identified, 76 of which met the search selection criteria. The Microsoft Kinect™ and the Leap Motion Controller™ were the most widely used COTS devices. The most common intervention was image manipulation in surgical and interventional radiology environments, followed by interaction with virtual reality environments for educational or interventional purposes; the possibility of using this technology to develop portable, low-cost simulators for skills learning in MIS was also examined. Given that the vast majority of articles found in this systematic review were proof-of-concept or prototype user testing and feasibility testing studies, we can conclude that this is a field still in the exploration phase in areas that require touchless manipulation in environments and settings that must adhere to asepsis and antisepsis protocols, such as angiography suites and operating rooms. CONCLUSIONS COTS devices applied to hand and instrument gesture-based interfaces in the field of simulation for skills learning and training in MIS could open up a promising field to achieve ubiquitous training and pre-surgical warm-up.


Author(s):  
D. Pagliari ◽  
L. Pinto ◽  
M. Reguzzoni ◽  
L. Rossi

Since its launch on the market, the Microsoft Kinect sensor has represented a great revolution in the field of low-cost navigation, especially for indoor robotic applications. In fact, this system is endowed with a depth camera, as well as a visual RGB camera, at a cost of about $200. The characteristics and the potential of the Kinect sensor have been widely studied for indoor applications. The second generation of this sensor has been announced to be capable of acquiring data even outdoors, under direct sunlight. The task of navigating while passing from an indoor to an outdoor environment (and vice versa) is very demanding, because sensors that work properly in one environment are typically unsuitable in the other. In this sense, the Kinect could represent an interesting device for bridging the navigation solution between outdoor and indoor environments. In this work, the accuracy and the field of application of the new generation of Kinect sensor have been tested outdoors, considering different lighting conditions and the reflective properties of different materials with respect to the emitted rays. Moreover, an integrated system with a low-cost GNSS receiver has been studied, with the aim of taking advantage of GNSS positioning when the satellite visibility conditions are good enough. A kinematic test performed outdoors using a Kinect sensor and a GNSS receiver is presented here.
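
A conceptual sketch of the indoor/outdoor hand-over described above may help: the navigation solution accepts the GNSS fix when satellite visibility is good and otherwise dead-reckons from Kinect-derived displacements. This is not the authors' implementation; the thresholds, data structures and the `kinect_delta` input are illustrative assumptions.

```python
# Conceptual sketch only: switch the navigation solution between a GNSS fix
# and Kinect-based relative positioning depending on satellite visibility.
from dataclasses import dataclass

MIN_SATELLITES = 6      # assumed threshold for trusting the GNSS fix
MAX_HDOP = 2.0          # assumed horizontal dilution-of-precision threshold

@dataclass
class GnssFix:
    east: float
    north: float
    satellites: int
    hdop: float

def fuse_position(gnss: GnssFix, kinect_delta, last_position):
    """Prefer GNSS when the fix is good; otherwise dead-reckon from the last
    accepted position using a Kinect-derived displacement."""
    if gnss.satellites >= MIN_SATELLITES and gnss.hdop <= MAX_HDOP:
        return (gnss.east, gnss.north)            # good satellite visibility
    de, dn = kinect_delta                         # e.g. from depth-image registration
    return (last_position[0] + de, last_position[1] + dn)

# Example: poor GNSS geometry (few satellites), so the Kinect displacement is used.
position = fuse_position(GnssFix(0.0, 0.0, 4, 3.5), (0.12, -0.05), (10.0, 20.0))
print(position)  # -> (10.12, 19.95)
```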


10.52278/2415 ◽  
2020 ◽  
Author(s):  
Diego Gabriel Alonso

In recent years, new user-interaction paradigms have emerged in combination with technological advances. This has motivated industry to create ever more powerful and accessible Natural User Interface (NUI) devices. In particular, depth cameras have reached high levels of user adoption; notable among these devices are the Microsoft Kinect, the Intel RealSense and the Leap Motion Controller. Devices of this kind facilitate data acquisition for Human Activity Recognition (HAR), an area whose goal is the automatic identification, within image sequences, of activities performed by human beings. Among the different types of human activities are manual gestures, that is, those performed with the hands. Manual gestures can be static or dynamic, depending on whether they involve movement across the image sequences. Hand gesture recognition allows developers of Human-Computer Interaction (HCI) systems to create more immersive, natural and intuitive experiences and interactions. However, this task is not straightforward, which is why academia has addressed the problem with machine learning techniques. An analysis of the current state of the art shows that the vast majority of proposed approaches do not handle the recognition of static and dynamic gestures simultaneously (hybrid approaches); that is, each approach is designed to recognize only one type of gesture. Moreover, in the context of real HCI systems, the computational cost and resource consumption of these approaches must also be taken into account, so the approaches should be lightweight. In addition, almost all approaches in the state of the art tackle the problem by placing the cameras in front of the users (second-person perspective) rather than from a First-Person View (FPV), in which the user wears the device. This may be related to the fact that relatively ergonomic (small, lightweight) devices that make an FPV perspective viable have only appeared in recent years. In this context, this thesis proposes a lightweight approach for hybrid gesture recognition with depth cameras from an FPV perspective. The proposed approach consists of three main components. The first, Data Acquisition, defines the device to be used and collects the images and the depth information, which is normalized to the range 0 to 255 (the scale of the RGB channels). The second, Preprocessing, aims to make two image sequences with temporal variations comparable; to this end, resampling and resolution-reduction techniques are applied. This component also computes the optical flow determined by the available color image sequences. In particular, optical flow is used as an additional information channel given its advantages for the spatio-temporal analysis of the videos.
Third, with the sampled sequences and the optical flow information, the Deep Learning Model component applies deep learning techniques that cover the feature extraction and classification stages. In particular, a densely connected convolutional network architecture with multi-modal support is proposed. Notably, the fusion of the modalities takes place neither at an early nor at a late stage but within the model itself. In this way, an end-to-end model is obtained that benefits from the information channels both separately and jointly. The experiments carried out have shown very encouraging results (reaching 90% accuracy), indicating that this choice of architecture yields high parameter efficiency as well as fast prediction times. It should be noted that the tests were performed on a relevant dataset of the area. On this basis, the performance of the proposal is analyzed under different scenarios, such as lighting variation or camera movement, different types of gestures, and per-person sensitivity or bias, among others.
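
To make the described architecture concrete, the following is a minimal sketch of a densely connected convolutional model with one stream per modality (RGB, depth, optical flow) and fusion performed inside the network, neither early nor late. It is not the thesis implementation: the layer sizes, growth rate, input resolution and number of gesture classes are illustrative assumptions (PyTorch is used here).

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Small DenseNet-style block: each layer sees all previous feature maps."""
    def __init__(self, in_channels, growth=16, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth, kernel_size=3, padding=1)))
            ch += growth
        self.out_channels = ch

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)  # dense connectivity
        return x

class MultiModalGestureNet(nn.Module):
    """One dense stream per modality, in-model fusion, shared classifier head."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.rgb_stream = DenseBlock(in_channels=3)
        self.depth_stream = DenseBlock(in_channels=1)
        self.flow_stream = DenseBlock(in_channels=2)       # optical flow (dx, dy)
        fused = (self.rgb_stream.out_channels
                 + self.depth_stream.out_channels
                 + self.flow_stream.out_channels)
        self.fusion = DenseBlock(in_channels=fused)         # fusion inside the model
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(self.fusion.out_channels, num_classes))

    def forward(self, rgb, depth, flow):
        f = torch.cat([self.rgb_stream(rgb),
                       self.depth_stream(depth),
                       self.flow_stream(flow)], dim=1)
        return self.head(self.fusion(f))

# Example forward pass on one downsampled frame per modality (shapes are assumptions).
model = MultiModalGestureNet(num_classes=10)
logits = model(torch.randn(1, 3, 64, 64),   # RGB frame
               torch.randn(1, 1, 64, 64),   # depth channel
               torch.randn(1, 2, 64, 64))   # optical flow field (dx, dy)
print(logits.shape)  # torch.Size([1, 10])
```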


Sensors ◽  
2019 ◽  
Vol 19 (5) ◽  
pp. 1072 ◽  
Author(s):  
Tibor Guzsvinecz ◽  
Veronika Szucs ◽  
Cecilia Sik-Lanyi

As the need for sensors increases with the inception of virtual reality, augmented reality and mixed reality, the purpose of this paper is to evaluate the suitability of the two Kinect devices and the Leap Motion Controller. When evaluating their suitability, the authors focused on the state of the art, device comparison, accuracy, precision, existing gesture recognition algorithms, and the price of the devices. The aim of this study is to give an insight into whether these devices could substitute for more expensive sensors in industry or on the market. While in general the answer is yes, it is not as simple as it seems: there are significant differences between the devices, even between the two Kinects, such as different measurement ranges, different error distributions on each axis, and depth precision that changes with distance.
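
As a worked illustration of the accuracy/precision distinction and of depth precision changing with distance, the short sketch below computes bias and spread from repeated readings of a static target. The data are synthetic and the analysis is not taken from the paper.

```python
# Illustrative sketch: estimate accuracy (bias) and precision (spread) of a
# depth sensor from repeated measurements of a fixed target at known distances.
import numpy as np

# distance_mm -> repeated depth readings of a fixed flat target (synthetic data)
readings = {
    1000: np.random.normal(1000, 1.5, size=500),
    2000: np.random.normal(2000, 4.0, size=500),
    3000: np.random.normal(3000, 9.0, size=500),
}

for true_mm, samples in readings.items():
    bias = samples.mean() - true_mm       # accuracy: systematic offset from truth
    precision = samples.std(ddof=1)       # precision: spread of repeated readings
    print(f"{true_mm} mm: bias = {bias:+.2f} mm, precision (1 sigma) = {precision:.2f} mm")
```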


Buildings ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 336
Author(s):  
Behnam Mobaraki ◽  
Fidel Lozano-Galant ◽  
Rocio Porras Soriano ◽  
Francisco Javier Castilla Pascual

In recent years, many scholars have dedicated their research to the development of low-cost sensors for monitoring various parameters. Despite their high number of applications, the state of the art related to low-cost sensors in building monitoring has not been addressed. To fill this gap, this article presents a systematic review, following a well-established methodology, to analyze the state of the art in two aspects of building monitoring, structural and indoor parameters, based on the SCOPUS database. This analysis illustrates the potential uses of low-cost sensors in the building sector and points scholars to the preferred communication protocols and the most common microcontrollers for installing low-cost monitoring systems. In addition, special attention is paid to describing the different areas of the two mentioned fields of building monitoring and the most crucial parameters to be monitored in buildings. Finally, the deficiencies arising from the limited number of studies carried out in various fields of building monitoring are reviewed, and a series of parameters that ought to be studied in the future is proposed.
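
As an illustration of the kind of low-cost monitoring node the review surveys, the sketch below publishes an indoor parameter over MQTT, one of the lightweight protocols commonly paired with low-cost microcontrollers. It is not drawn from any of the reviewed studies; the topic, sensor stub and sampling interval are assumptions, and a public test broker is used only for illustration.

```python
import json
import time
import random

import paho.mqtt.client as mqtt  # pip install "paho-mqtt<2" (1.x constructor used below)

BROKER = "test.mosquitto.org"             # public test broker, illustration only
TOPIC = "building/room101/temperature"    # hypothetical topic

def read_temperature() -> float:
    """Stand-in for a real sensor driver (e.g. a DHT22 or BME280)."""
    return 21.0 + random.uniform(-0.5, 0.5)

client = mqtt.Client()
client.connect(BROKER, 1883)
client.loop_start()

for _ in range(3):
    payload = json.dumps({"t_celsius": round(read_temperature(), 2),
                          "timestamp": time.time()})
    client.publish(TOPIC, payload)        # publish one reading
    time.sleep(60)                        # slow sampling suffices for indoor parameters

client.loop_stop()
client.disconnect()
```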


2021 ◽  
Vol 17 (3) ◽  
pp. 1-25
Author(s):  
Nico Mexis ◽  
Nikolaos Athanasios Anagnostopoulos ◽  
Shuai Chen ◽  
Jan Bambach ◽  
Tolga Arul ◽  
...  

In recent years, a new generation of the Internet of Things (IoT 2.0) is emerging, based on artificial intelligence, blockchain technology, machine learning, and the constant consolidation of pre-existing systems and subsystems into larger systems. In this work, we construct and examine a proof-of-concept prototype of such a system of systems, which consists of heterogeneous commercial off-the-shelf components and utilises diverse communication protocols. We recognise the inherent need for lightweight security in this context, and address it by employing a low-cost state-of-the-art security solution. Our solution is based on a novel hardware and software co-engineering paradigm, utilising well-known software-based cryptographic algorithms in order to maximise the security potential of the hardware security primitive (a Physical Unclonable Function) that is used as a security anchor. The performance of the proposed security solution is evaluated, proving its suitability even for real-time applications. Additionally, the Dolev-Yao attacker model is considered in order to assess the resilience of our solution against attacks on the confidentiality, integrity, and availability of the examined system of systems. In this way, it is confirmed that the proposed solution is able to address the emerging security challenges of the oncoming era of systems of systems.
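
A minimal sketch of the general idea, not the paper's implementation: a (simulated) PUF response anchors a software-based scheme by seeding key derivation, and the derived key authenticates messages exchanged between subsystems, covering integrity and authenticity against a Dolev-Yao style attacker (confidentiality would additionally require encryption). The challenge/response values and message format are illustrative only.

```python
import hashlib
import hmac
import os

def simulated_puf_response(challenge: bytes) -> bytes:
    """Stand-in for querying a real PUF; mixes in a device-unique secret."""
    device_unique_secret = b"\x13\x37" * 16          # would come from hardware
    return hashlib.sha256(device_unique_secret + challenge).digest()

def derive_key(puf_response: bytes, context: bytes) -> bytes:
    """Lightweight key derivation from the PUF response (HKDF-like, simplified)."""
    return hmac.new(puf_response, context, hashlib.sha256).digest()

# Enrolment: both endpoints agree on a challenge and derive the session key.
challenge = os.urandom(16)
key = derive_key(simulated_puf_response(challenge), b"iot2.0-session")

# The sender authenticates a reading; the receiver verifies the tag, so any
# in-transit modification by a Dolev-Yao attacker is detected.
message = b'{"sensor": "door-07", "state": "open"}'
tag = hmac.new(key, message, hashlib.sha256).digest()
assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())
```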


2021 ◽  
pp. 1-9
Author(s):  
Ana de los Reyes-Guzmán ◽  
Vicente Lozano-Berrio ◽  
María Alvarez-Rodríguez ◽  
Elisa López-Dolado ◽  
Silvia Ceruelo-Abajo ◽  
...  

BACKGROUND: There is growing interest in the use of technology in the field of neurorehabilitation in order to quantify and generate knowledge about sensorimotor disorders after neurological diseases, with the understanding that technology has high potential as a therapeutic tool. Taking into account that the rehabilitation of motor disorders should extend beyond the inpatient stay, it is necessary to involve low-cost technology in order to have technological solutions that can cover the outpatient period at home. OBJECTIVE: To present the RehabHand prototype, based on virtual reality applications, for the rehabilitation of manipulative skills of the upper limbs in patients with neurological conditions, and to determine the target population with respect to spinal cord injured patients. METHODS: Seven virtual reality applications were designed and developed with a therapeutic purpose and are operated by means of the Leap Motion Controller. The target population was determined from a sample of 40 people, healthy participants and patients, by analyzing hand movements and gestures. RESULTS: The hand movements and gestures were estimated with a fitting rate within the range 0.607–0.953, and the target population was determined by cervical level and upper extremity motor score. CONCLUSIONS: The Leap Motion Controller is suitable for a defined sample of cervical patients for rehabilitation purposes.
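
A simplified sketch, not the RehabHand code, of how per-frame hand-tracking values such as those exposed by the Leap Motion Controller (grab and pinch strengths in [0, 1]) can be mapped to coarse therapeutic gesture labels; the thresholds and frame format are assumptions for illustration.

```python
from typing import Dict

GRAB_THRESHOLD = 0.8    # assumed threshold for a full-hand closure
PINCH_THRESHOLD = 0.8   # assumed threshold for thumb-index opposition

def classify_frame(frame: Dict[str, float]) -> str:
    """Map one tracked-hand frame to a coarse gesture label."""
    if frame["grab_strength"] >= GRAB_THRESHOLD:
        return "grasp"       # e.g. grasping a virtual object
    if frame["pinch_strength"] >= PINCH_THRESHOLD:
        return "pinch"       # thumb-index opposition
    return "open_hand"

# Example frames as they might be streamed from the tracker.
frames = [{"grab_strength": 0.95, "pinch_strength": 0.2},
          {"grab_strength": 0.10, "pinch_strength": 0.9},
          {"grab_strength": 0.05, "pinch_strength": 0.1}]
print([classify_frame(f) for f in frames])  # ['grasp', 'pinch', 'open_hand']
```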


Author(s):  
Joshua Q. Coburn ◽  
Ian Freeman ◽  
John L. Salmon

In the past few years, there have been some significant advances in consumer virtual reality (VR) devices. Devices such as the Oculus Rift, HTC Vive, Leap Motion™ Controller, and Microsoft Kinect® are bringing immersive VR experiences into the homes of consumers with much lower cost and space requirements than previous generations of VR hardware. These new devices are also lowering the barrier to entry for VR engineering applications. Past research has suggested that there are significant opportunities for using VR during design tasks to improve results and reduce development time. This work reviews the latest generation of VR hardware and reviews research studying VR in the design process. Additionally, this work extracts the major themes from the reviews and discusses how the latest technology and research may affect the engineering design process. We conclude that these new devices have the potential to significantly improve portions of the design process.

