CONTROL OF A DRONE WITH BODY GESTURES

2021 ◽  
Vol 1 ◽  
pp. 761-770
Author(s):  
Nicolas Gio ◽  
Ross Brisco ◽  
Tijana Vuletic

Abstract: Drones are becoming more popular in military applications and in civil aviation, among hobbyists and businesses. Achieving natural Human-Drone Interaction (HDI) would enable unskilled pilots to take part in flying these devices and, more generally, would ease the use of drones. The research in this paper focuses on the design and development of a Natural User Interface (NUI) that allows a user to pilot a drone with body gestures. A Microsoft Kinect was used to capture the user's body information, which was processed by a motion recognition algorithm and converted into commands for the drone. A Graphical User Interface (GUI) gives feedback to the user: visual feedback from the drone's onboard camera is shown on screen, and an interactive menu, itself controlled by body gestures, offers functionalities such as photo and video capture or take-off and landing. This research resulted in an efficient and functional system that is more instinctive, natural, immersive and fun than piloting with a physical controller, and that includes innovative aspects such as additional piloting functionalities and control of the flight speed.
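The paper does not reproduce its recognition code. As a minimal illustrative sketch in Python, with hypothetical joint coordinates and thresholds (none of which come from the paper), a gesture-to-command mapping of this kind can be reduced to comparing skeleton joint positions against a deadzone:

```python
from collections import namedtuple

Joint = namedtuple("Joint", "x y z")  # metres, Kinect camera space

DEADZONE = 0.15  # ignore small offsets so a resting pose sends no command

def gesture_to_command(right_hand: Joint, right_shoulder: Joint,
                       left_hand: Joint, left_shoulder: Joint) -> dict:
    """Map hand positions relative to the shoulders to drone velocities."""
    dy = right_hand.y - right_shoulder.y   # raise/lower right hand: climb/descend
    dx = left_hand.x - left_shoulder.x     # swing left hand sideways: roll
    clamp = lambda v: max(-1.0, min(1.0, v))
    return {
        "vertical": clamp(dy) if abs(dy) > DEADZONE else 0.0,
        "roll": clamp(dx) if abs(dx) > DEADZONE else 0.0,
    }

# Example: right hand 40 cm above the shoulder -> climb at 0.4 of max speed
print(gesture_to_command(Joint(0, 0.4, 2), Joint(0, 0, 2),
                         Joint(-0.3, -0.5, 2), Joint(-0.25, 0, 2)))
```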

Author(s):  
Ghulam Mustafa ◽  
Muhammad Tahir Qadri ◽  
Umar Daraz

A remotely controlled microscope slide stage was designed with a dedicated Graphical User Interface (GUI) that connects a user at a remote location to a real microscope, so that the user can easily view and control the slide on the microscope's stage. Precision motors allow movement in the three dimensions a pathologist requires. Pathologists can access slides from any remote location, removing the need for their physical presence. This invention would increase healthcare efficiency by reducing the time and cost of diagnosis, making it easy to obtain an expert's opinion and sparing the pathologist from relocating for work. The microscope is controlled from a computer through the GUI, with which a pathologist can monitor, control and record the image of a slide, and can thus work regardless of location, time, cost or the physical availability of lab equipment. The technology helps specialists view a patient's slide from any location in the world while monitoring and controlling the stage, and helps pathology laboratories obtain opinions from senior pathologists based anywhere in the world. The system also reduces risk to patients.
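The abstract does not specify the motor interface. As a hedged sketch, three-axis stages of this kind are often driven by short text commands over a serial link; the command grammar, port name and baud rate below are invented purely for illustration:

```python
import serial  # pyserial

# Hypothetical command grammar: "MOVE <axis> <steps>\n", one axis per command.
# Port and baud rate are assumptions; a real stage defines its own protocol.
stage = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)

def move(axis: str, steps: int) -> None:
    """Step the stage along one axis ('X', 'Y' for the slide, 'Z' for focus)."""
    stage.write(f"MOVE {axis} {steps}\n".encode("ascii"))

move("X", 120)   # pan the slide
move("Z", -10)   # fine focus adjustment
```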


10.52278/2415 ◽  
2020 ◽  
Author(s):  
Diego Gabriel Alonso

In recent years, new paradigms of user interaction have emerged alongside technological advances. This has motivated industry to create increasingly powerful and affordable Natural User Interface (NUI) devices. Depth cameras in particular have reached high levels of user adoption; notable devices include the Microsoft Kinect, the Intel RealSense and the Leap Motion Controller. Such devices ease data acquisition for Human Activity Recognition (HAR), an area that aims to automatically identify, within image sequences, activities performed by human beings. Among the different types of human activities are manual gestures, i.e., those performed with the hands. Hand gestures can be static or dynamic, depending on whether they involve movement across the image sequence. Hand gesture recognition lets developers of Human-Computer Interaction (HCI) systems create more immersive, natural and intuitive experiences and interactions. However, this task is not simple, which is why academia has addressed the problem with machine learning techniques. An analysis of the current state of the art shows that the vast majority of proposed approaches do not handle the recognition of static and dynamic gestures simultaneously (hybrid approaches); that is, each approach is designed to recognize only one type of gesture. Moreover, in the context of real HCI systems, the computational cost and resource consumption of these approaches must also be considered, so the approaches should be lightweight. In addition, almost all approaches in the state of the art place the cameras in front of the users (second-person perspective) rather than adopting the First-Person View (FPV) perspective, in which the user wears the device. This is likely because relatively ergonomic devices (small and lightweight) that make an FPV perspective viable have only appeared in recent years. In this context, this thesis proposes a lightweight approach to hybrid gesture recognition with depth cameras from the FPV perspective. The proposed approach consists of three main components. The first, Data Acquisition, defines the device to use and collects the images and the depth information, which is normalized to the 0-255 range (the scale of the RGB channels). The second, Preprocessing, aims to make two image sequences with temporal variations comparable; to this end, resampling and resolution-reduction techniques are applied. This component also computes the optical flow determined by the available colour image sequences; in particular, optical flow is used as an additional information channel given its advantages for spatio-temporal analysis of video.
Third, with the sampled sequences and the optical flow information, the Deep Learning Model component applies deep learning techniques that cover the feature extraction and classification stages. In particular, a densely connected convolutional network architecture with multi-modal support is proposed. Notably, the fusion of modalities is neither early nor late, but takes place within the model itself. This yields an end-to-end model that benefits from the information channels both separately and jointly. Experiments have shown very encouraging results (reaching 90% accuracy), indicating that this type of architecture achieves high parameter efficiency as well as fast prediction times. The tests are performed on a relevant dataset of the area. On this basis, the performance of the proposal is analysed across different scenarios, such as lighting variation and camera movement, different gesture types, and per-person sensitivity or bias, among others.
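The thesis describes the preprocessing in prose only. As a minimal sketch, assuming OpenCV and raw 16-bit depth frames (both assumptions on my part), the depth normalization to 0-255 and the dense optical flow computation could be expressed as:

```python
import cv2
import numpy as np

def normalize_depth(depth_raw: np.ndarray) -> np.ndarray:
    """Rescale a raw depth frame to 0-255 so it matches the RGB channel range."""
    d = depth_raw.astype(np.float32)
    d = (d - d.min()) / max(float(d.max() - d.min()), 1e-6)
    return (d * 255).astype(np.uint8)

def dense_flow(prev_bgr: np.ndarray, next_bgr: np.ndarray) -> np.ndarray:
    """Farneback dense optical flow between two consecutive colour frames."""
    g0 = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    # Returns an HxWx2 array of per-pixel (dx, dy) displacements, usable
    # as an extra input channel alongside RGB and depth.
    return cv2.calcOpticalFlowFarneback(g0, g1, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
```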


2015 ◽  
Author(s):  
Zeeshan Ahmed

Software design and engineering are essential to the impact of bioinformatics software. We propose a new approach, 'Butterfly', for better modelling of scientific software solutions, targeting key developmental points: intuitive graphical user interface design, stable methodical implementation and comprehensive output presentation. The research addressed the following three key points: 1) the differences and distinct challenges involved in moving from traditional to scientific software engineering; 2) the need for feedback and control loops, following basic engineering principles, during the implementation of scientific software solutions; and 3) a new software design approach that helps in developing and implementing a comprehensive scientific software solution. We validated the approach by comparing old and new bioinformatics software solutions. Moreover, we have successfully applied our approach in the design and engineering of several widely applied and published bioinformatics and neuroinformatics tools, including DroLIGHT, LS-MIDA, Isotopo, Ant-App-DB, GenomeVX and Lipid-Pro.


Author(s):  
Víctor PEREZ-GARCIA ◽  
Joel QUINTANILLA-DOMINGUEZ ◽  
Israel YAÑEZ-VARGAS ◽  
José AGUILERA-GONZALEZ

This paper describes the design and development of a Graphical User Interface (GUI), built in the virtual instrumentation software NI LabVIEW using the VISA function, to graphically visualize and store data for the climatological variables temperature and relative humidity. The graphical interface offers the option of exporting the date, time and readings of the two variables to text documents with the ".txt" extension. It acquires its data from a wireless monitoring and control electronic board whose main device is a PIC16F877A microcontroller. An AMT1001 precision analog sensor was used to sense temperature and relative humidity. The PIC16F877A was programmed in the C language with the CCS compiler to acquire the data and send it to the computer via RS-232 communication, using the PL2303 USB-to-TTL converter module. To check the GUI's operation, the wireless monitoring and control board was connected to the computer by wire; however, the climate variables can also be monitored wirelessly using XBEE technology. Future work aims to monitor the climate of a horticultural greenhouse with XBEE technology, so that the data is sent wirelessly to a computer running the GUI explained in this article and connected to Ethernet or WiFi, allowing the data to be displayed and analysed over the internet.
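LabVIEW block diagrams cannot be reproduced in text. As an illustrative Python sketch of the acquisition-and-logging side of such a GUI, assuming a "temperature,humidity" line format, port name and baud rate that are not taken from the paper:

```python
import serial          # pyserial
from datetime import datetime

# Port, baud rate and line format are assumptions; the paper's firmware
# defines its own framing over the PL2303 USB-to-TTL link.
port = serial.Serial("/dev/ttyUSB0", 9600, timeout=2)

with open("climate_log.txt", "a") as log:
    for _ in range(10):                      # read ten samples, then stop
        line = port.readline().decode("ascii", errors="ignore").strip()
        parts = line.split(",")              # e.g. "23.5,61.0"
        if len(parts) != 2:
            continue                         # skip malformed lines
        temp, humidity = parts
        stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        log.write(f"{stamp}\t{temp}\t{humidity}\n")
```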


2020 ◽  
Vol 52 (6) ◽  
pp. 2372-2382
Author(s):  
Jack E. Taylor ◽  
Alistair Beith ◽  
Sara C. Sereno

Abstract: LexOPS is an R package and user interface designed to facilitate the generation of word stimuli for use in research. Notably, the tool permits the generation of suitably controlled word lists for any user-specified factorial design and can be adapted for use with any language. It features an intuitive graphical user interface, including the visualization of both the distributions within and relationships among variables of interest. An inbuilt database of English words is also provided, including a range of lexical variables commonly used in psycholinguistic research. This article introduces LexOPS, outlining the features of the package and detailing the sources of the inbuilt dataset. We also report a validation analysis, showing that, in comparison to stimuli of existing studies, stimuli optimized with LexOPS generally demonstrate greater constraint and consistency in variable manipulation and control. Current instructions for installing and using LexOPS are available at https://JackEdTaylor.github.io/LexOPSdocs/.
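LexOPS itself is an R package; purely to illustrate the kind of constraint it automates (and emphatically not LexOPS's actual API), a toy Python sketch of splitting words by a frequency cutoff while matching on length, with made-up values, might look like:

```python
# Toy illustration of controlled stimulus generation (not LexOPS's API):
# split candidate words by a frequency cutoff, then pair high- and
# low-frequency words matched on length.

words = {            # word: (frequency per million, length) -- made-up values
    "table": (120, 5), "haze": (8, 4), "chair": (110, 5),
    "plume": (6, 5), "house": (300, 5), "gourd": (4, 5),
}

high = [w for w, (freq, _) in words.items() if freq >= 50]
low  = [w for w, (freq, _) in words.items() if freq < 50]

pairs = [(h, l) for h in high for l in low
         if words[h][1] == words[l][1]]      # control: equal length
print(pairs)
```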


2019 ◽  
Vol 16 (8) ◽  
pp. 3384-3394
Author(s):  
Sathish Kumar Selvaperumal ◽  
Waleed Al-Gumaei ◽  
Raed Abdulla ◽  
Vinesh Thiruchelvam

This paper aims to design and develop a network infrastructure for a smart campus using the Internet of Things, which can be used to control different devices and to update management with real-time data. In the proposed system, a NodeMCU ESP8266 is interfaced with a thermal and motion sensor for human detection, a humidity and temperature sensor for the room, and relays to control the lights and air-conditioning. An MQTT broker is used to carry data and control messages to and from the NodeMCU ESP8266, a Raspberry Pi and LoRa, interfaced wirelessly with Node-RED. The system is thus controlled and monitored wirelessly through the developed integrated Graphical User Interface together with a mobile application. The performance of the proposed system is analysed and evaluated by testing motion detection in the classroom, the LoRa range via RSSI, the average time the system takes to respond, and the average time the Graphical User Interface takes to respond and update its data. Finally, the average time taken by the system and the Graphical User Interface to respond is less than 1 s for the lighting and air-conditioning control systems, and less than 2 s for the security and parking systems.
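The abstract names MQTT as the transport. A minimal Python sketch of the publish/subscribe pattern such a system rests on, using paho-mqtt; the broker address and topic layout below are assumptions, not the paper's:

```python
import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"            # assumed broker address
CMD_TOPIC = "campus/room1/relay"   # assumed topic layout
TELEMETRY = "campus/room1/climate"

def on_message(client, userdata, msg):
    # e.g. the NodeMCU publishes "23.5,61.0" (temperature, humidity)
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TELEMETRY)
client.publish(CMD_TOPIC, "lights:on")  # switch a relay via the broker
client.loop_forever()                   # block, handling incoming telemetry
```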


2011 ◽  
Vol 10 (1) ◽  
pp. 45 ◽  
Author(s):  
Maged N Kamel Boulos ◽  
Bryan J Blanchard ◽  
Cory Walker ◽  
Julio Montero ◽  
Aalap Tripathy ◽  
...  

Author(s):  
Mauro De Bellis ◽  
Paul Phamduy ◽  
Maurizio Porfiri

Interactive control modes for robotic-fish-based informal science learning activities have been shown to increase user interest in STEM careers. This study explores the use of natural user interfaces to engage users in an interactive activity and excite them about the possibility of a robotics career. In this work, we propose a novel natural user interface platform that enhances participant interaction by controlling a robotic fish in a set of tasks. Specifically, we develop and characterize a new platform, which utilizes a Microsoft Kinect and an ad-hoc communication protocol. Preliminary studies are conducted to assess the usability of the platform.
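The paper describes its communication protocol only as ad hoc. A hedged Python sketch of the general idea (a gesture decoded on the PC is packed into a compact command datagram for the robot; the opcodes, address and packet layout are invented for illustration):

```python
import socket
import struct

ROBOT_ADDR = ("192.168.4.1", 5005)   # assumed robot IP and port
OPCODES = {"forward": 0x01, "left": 0x02, "right": 0x03, "stop": 0x00}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_command(gesture: str, speed: int) -> None:
    """Pack a recognized gesture as a 2-byte (opcode, speed) UDP datagram."""
    packet = struct.pack("BB", OPCODES[gesture], max(0, min(255, speed)))
    sock.sendto(packet, ROBOT_ADDR)

send_command("forward", 180)   # swim ahead at ~70% of maximum speed
```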

