Deep Learning Based BCI Control of a Robotic Service Assistant Using Intelligent Goal Formulation

2018 ◽  
Author(s):  
D. Kuhner ◽  
L.D.J. Fiederer ◽  
J. Aldinger ◽  
F. Burget ◽  
M. Völker ◽  
...  

Abstract As autonomous service robots become more affordable and thus available to the general public, there is a growing need for user-friendly interfaces to control these systems. Control interfaces typically become more complicated as the complexity of the robotic tasks and the environment increases. Traditional control modalities such as touch, speech or gesture commands are not necessarily suited for all users. While non-expert users can make the effort to familiarize themselves with a robotic system, paralyzed users may not be capable of controlling such systems even though they need robotic assistance most. In this paper, we present a novel framework that allows these users to interact with a robotic service assistant in a closed-loop fashion, using only thoughts. The system is composed of several interacting components: non-invasive neuronal signal recording and co-adaptive deep learning, which form the brain-computer interface (BCI); high-level task planning based on referring expressions; navigation and manipulation planning; and environmental perception. We extensively evaluate the BCI in various tasks, determine the performance of the goal formulation user interface and investigate its intuitiveness in a user study. Furthermore, we demonstrate the applicability and robustness of the system in real-world scenarios, considering fetch-and-carry tasks and tasks involving human-robot interaction. As our results show, the system is capable of adapting to frequent changes in the environment and reliably accomplishes given tasks within a reasonable amount of time. Combined with high-level planning using referring expressions and autonomous robotic systems, interesting new perspectives open up for non-invasive BCI-based human-robot interactions.
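The abstract above outlines a closed-loop pipeline from decoded brain signals to goal formulation. As a purely illustrative sketch (not the authors' architecture), the snippet below shows how decoded command probabilities from an EEG window could drive selection among referring-expression goal options; the command set, feature shapes and menu items are assumptions.

```python
import numpy as np

# Toy sketch of the closed-loop selection step: a decoder emits class
# probabilities for a small command vocabulary, and the interface maps the
# winning command onto the current menu of goal options. The command set and
# weights are illustrative assumptions, not the authors' implementation.

COMMANDS = ["next", "previous", "select", "abort"]

def decode_eeg_window(features: np.ndarray, weights: np.ndarray) -> str:
    """Linear softmax readout over pre-extracted EEG features (a placeholder
    for the co-adaptive deep network described in the abstract)."""
    logits = features @ weights              # (n_features,) @ (n_features, 4)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return COMMANDS[int(np.argmax(probs))]

def step_menu(options: list[str], cursor: int, command: str):
    """Advance the goal-formulation menu by one decoded command."""
    if command == "next":
        return (cursor + 1) % len(options), None
    if command == "previous":
        return (cursor - 1) % len(options), None
    if command == "select":
        return cursor, options[cursor]       # chosen referring expression
    return 0, None                           # abort: reset the cursor

options = ["cup on the table", "book on the shelf", "bottle in the kitchen"]
cursor, goal = step_menu(options, 0, "select")
print(goal)  # -> "cup on the table"
```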

IoT ◽  
2020 ◽  
Vol 1 (2) ◽  
pp. 494-505
Author(s):  
Radu-Casian Mihailescu ◽  
Georgios Kyriakou ◽  
Angelos Papangelis

In this paper we address the problem of automatic sensor composition for servicing human-interpretable high-level tasks. To this end, we introduce multi-level distributed intelligent virtual sensors (multi-level DIVS) as an overlay framework for a given mesh of physical and/or virtual sensors already deployed in the environment. The goal for multi-level DIVS is two-fold: (i) to provide a convenient way for the user to specify high-level sensing tasks; (ii) to construct the computational graph that provides the correct output given a specific sensing task. For (i) we resort to a conversational user interface, which is an intuitive and user-friendly manner in which the user can express the sensing problem, i.e., natural language queries, while for (ii) we propose a deep learning approach that establishes the correspondence between the natural language queries and their virtual sensor representation. Finally, we evaluate and demonstrate the feasibility of our approach in the context of a smart city setup.
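To make step (ii) concrete, here is a minimal, hedged sketch of mapping a natural-language sensing query onto a computational graph over deployed sensors. A keyword match stands in for the paper's learned deep model, and the sensor catalog and names are invented for illustration.

```python
# Minimal sketch of step (ii): composing a virtual sensor from a natural-
# language query. A real system would use the learned query encoder described
# in the paper; a keyword match stands in for it here.

SENSOR_CATALOG = {
    "temperature": ["temp_sensor_1", "temp_sensor_2"],
    "noise": ["mic_array_3"],
    "traffic": ["camera_7", "induction_loop_2"],
}

def compose_virtual_sensor(query: str) -> dict:
    """Return a simple computational graph: matched inputs feeding an
    aggregation node whose output answers the query."""
    matched = [s for kw, sensors in SENSOR_CATALOG.items()
               if kw in query.lower() for s in sensors]
    if not matched:
        raise ValueError("no sensors matched the query")
    return {"inputs": matched, "aggregate": "mean", "output": query}

print(compose_virtual_sensor("average temperature downtown"))
```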


2019 ◽  
Vol 128 (5) ◽  
pp. 1286-1310 ◽  
Author(s):  
Oscar Mendez ◽  
Simon Hadfield ◽  
Nicolas Pugeault ◽  
Richard Bowden

Abstract The use of human-level semantic information to aid robotic tasks has recently become an important area for both Computer Vision and Robotics. This has been enabled by advances in Deep Learning that allow consistent and robust semantic understanding. Leveraging this semantic vision of the world has allowed human-level understanding to naturally emerge from many different approaches. Particularly, the use of semantic information to aid in localisation and reconstruction has been at the forefront of both fields. Like robots, humans also require the ability to localise within a structure. To aid this, humans have designed high-level semantic maps of our structures called floorplans. We are extremely good at localising in them, even with limited access to the depth information used by robots. This is because we focus on the distribution of semantic elements, rather than geometric ones. Evidence of this is that humans are normally able to localise in a floorplan that has not been scaled properly. In order to grant this ability to robots, it is necessary to use localisation approaches that leverage the same semantic information humans use. In this paper, we present a novel method for semantically enabled global localisation. Our approach relies on the semantic labels present in the floorplan. Deep Learning is leveraged to extract semantic labels from RGB images, which are compared to the floorplan for localisation. While our approach is able to use range measurements if available, we demonstrate that they are unnecessary as we can achieve results comparable to state-of-the-art without them.
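As an illustration of the underlying idea (not the paper's actual pipeline), the sketch below ranks candidate floorplan poses by how well the semantic labels observed in a camera image agree with the labels the floorplan predicts as visible from each pose; the label sets and poses are made-up stand-ins, and the paper's method uses a CNN and a proper sensor model.

```python
from collections import Counter

# Illustrative scoring of candidate floorplan poses by semantic agreement.

def pose_score(observed: list[str], expected: list[str]) -> float:
    """Histogram intersection between observed and expected label counts."""
    obs, exp = Counter(observed), Counter(expected)
    return sum(min(obs[l], exp[l]) for l in obs) / max(len(observed), 1)

observed = ["door", "window", "door", "wall"]      # labels from the RGB image
candidates = {
    (2.0, 1.5): ["door", "door", "window", "wall"],  # pose -> visible labels
    (5.0, 3.0): ["wall", "wall", "window"],
}
best = max(candidates, key=lambda p: pose_score(observed, candidates[p]))
print(best)  # -> (2.0, 1.5)
```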


2009 ◽  
Vol 10 (3) ◽  
pp. 392-426 ◽  
Author(s):  
Thomas Wisspeintner ◽  
Tijn van der Zant ◽  
Luca Iocchi ◽  
Stefan Schiffer

Being part of the RoboCup initiative, the RoboCup@Home league targets the development and deployment of autonomous service and assistive robot technology that is essential for future personal domestic applications. The domain of domestic service and assistive robotics implicates a wide range of possible problems. The primary reasons for this include the large amount of uncertainty in the dynamic and non-standardized environments of the real world, and the related human interaction. Furthermore, the application orientation requires a large effort towards high-level integration combined with a demand for general robustness of the systems. This article details the need for an interdisciplinary community effort to iteratively identify related problems, to define benchmarks, to test and, finally, to solve the problems. The concepts and the implementation of the RoboCup@Home initiative as a combination of scientific exchange and competition are presented as an efficient method to accelerate and focus technological and scientific progress in the domain of domestic service robots. Finally, the progress in terms of performance increase in the benchmarks and technological advancements is evaluated and discussed.
Keywords: Domestic Service Robotics, Application, Uncertainty, Benchmark, Competition, Human–Robot Interaction, RoboCup@Home


2020 ◽  
pp. 49-52
Author(s):  
Trine Aabo Andersen

A new, fast measuring method for process optimization of sucrose crystallization is introduced, using image analysis based on high-quality images and algorithms. With the mobile, non-invasive at-line system, all steps of the sucrose crystallization can be measured to determine the crystal size distribution. The image analysis system is easy to operate and is also an efficient laboratory solution with user-friendly and customized software. In comparison to sieve analysis, image analyses performed with the ParticleTech Solution have proven to be reliable.
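For readers who want to experiment, the following is a generic, hedged sketch of contour-based crystal size measurement with OpenCV, in the spirit of the at-line system described above; it is not the ParticleTech algorithm, and the file name is a placeholder.

```python
import cv2
import numpy as np

# Generic contour-based crystal sizing: threshold, find crystal outlines,
# and compute an equivalent circular diameter per crystal (in pixels).
img = cv2.imread("crystals.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

# A pixel-to-micron calibration factor would be applied in practice.
diameters = [2.0 * np.sqrt(cv2.contourArea(c) / np.pi) for c in contours]
hist, edges = np.histogram(diameters, bins=20)  # crystal size distribution
print("median diameter [px]:", np.median(diameters))
```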


Mathematics ◽  
2020 ◽  
Vol 8 (12) ◽  
pp. 2258
Author(s):  
Madhab Raj Joshi ◽  
Lewis Nkenyereye ◽  
Gyanendra Prasad Joshi ◽  
S. M. Riazul Islam ◽  
Mohammad Abdullah-Al-Wadud ◽  
...  

Enhancement of cultural heritage such as historical images is crucial to safeguarding the diversity of cultures. Automated colorization of black-and-white images has been the subject of extensive research in computer vision and machine learning. Our research addresses the problem of generating a plausible colored photograph from ancient, historical black-and-white images of Nepal using deep learning techniques without direct human intervention. Motivated by the recent success of deep learning techniques in image processing, a feed-forward, deep Convolutional Neural Network (CNN) in combination with Inception-ResNetV2 is trained on sets of sample images using back-propagation to recognize the pattern in RGB and grayscale values. The trained neural network is then used to predict the two chroma channels (a* and b*) given the grayscale (L) channel of test images. The CNN vividly colorizes images with the help of a fusion layer that accounts for local as well as global features. Two objective measures, namely Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR), are employed for objective quality assessment between the estimated color image and its ground truth. The model is trained on a dataset we created of 1.2 K historical images, comprising old and ancient photographs of Nepal, each with 256 × 256 resolution. The loss (MSE), PSNR, and accuracy of the model are found to be 6.08%, 34.65 dB, and 75.23%, respectively. Beyond the training results, the public acceptance or subjective validation of the generated images is assessed by means of a user study, in which the model achieves a naturalness score of 41.71% for its colorization results.
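The two quality measures reported above are straightforward to compute. The sketch below evaluates MSE and PSNR on random stand-in arrays shaped like the network's two-channel (a*, b*) output, assuming values scaled to [0, 1].

```python
import numpy as np

# Stand-ins for the predicted and ground-truth 256x256 chroma channels.
rng = np.random.default_rng(0)
pred_ab = rng.random((256, 256, 2))
true_ab = rng.random((256, 256, 2))

mse = float(np.mean((pred_ab - true_ab) ** 2))
psnr = 10.0 * np.log10(1.0 / mse)   # MAX = 1.0 for [0, 1]-scaled channels
print(f"MSE: {mse:.4f}  PSNR: {psnr:.2f} dB")
```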


2021 ◽  
Vol 53 (2) ◽  
Author(s):  
Sen Yang ◽  
Yaping Zhang ◽  
Siu-Yeung Cho ◽  
Ricardo Correia ◽  
Stephen P. Morgan

Abstract Conventional blood pressure (BP) measurement methods have various drawbacks, such as being invasive, cuff-based or requiring manual operation. There is significant interest in the development of non-invasive, cuff-less and continual BP measurement based on physiological signals. However, in these methods, extracting features from signals is challenging in the presence of noise or signal distortion. When using machine learning, errors in feature extraction result in errors in BP estimation; therefore, this study explores the use of raw signals as a direct input to a deep learning model. To enable comparison with traditional machine learning models that use features from the photoplethysmogram and electrocardiogram, a hybrid deep learning model that utilises both raw signals and physical characteristics (age, height, weight and gender) is developed. This hybrid model performs best in terms of both diastolic BP (DBP) and systolic BP (SBP), with mean absolute errors of 3.23 ± 4.75 mmHg and 4.43 ± 6.09 mmHg, respectively. DBP and SBP meet the Grade A and Grade B performance requirements of the British Hypertension Society, respectively.
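A minimal PyTorch sketch of a hybrid model in the spirit described above: a 1D convolutional branch over raw waveforms fused with a small branch over the four physical characteristics. Layer sizes, window length and channel count are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class HybridBPNet(nn.Module):
    """Illustrative hybrid network: raw-signal CNN + demographics MLP."""
    def __init__(self, signal_channels: int = 2, demo_features: int = 4):
        super().__init__()
        self.signal_branch = nn.Sequential(
            nn.Conv1d(signal_channels, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.demo_branch = nn.Sequential(nn.Linear(demo_features, 16), nn.ReLU())
        self.head = nn.Linear(32 + 16, 2)   # outputs: [SBP, DBP]

    def forward(self, signals, demographics):
        fused = torch.cat([self.signal_branch(signals),
                           self.demo_branch(demographics)], dim=1)
        return self.head(fused)

model = HybridBPNet()
signals = torch.randn(8, 2, 1000)   # batch of raw PPG+ECG windows (assumed)
demo = torch.randn(8, 4)            # age, height, weight, gender
print(model(signals, demo).shape)   # -> torch.Size([8, 2])
```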


Author(s):  
Mark O Sullivan ◽  
Carl T Woods ◽  
James Vaughan ◽  
Keith Davids

As it is appreciated that learning is a non-linear process – implying that coaching methodologies in sport should be accommodative – it is reasonable to suggest that player development pathways should also account for this non-linearity. A constraints-led approach (CLA), predicated on the theory of ecological dynamics, has been suggested as a viable framework for capturing the non-linearity of learning, development and performance in sport. The CLA articulates how skills emerge through the interaction of different constraints (task-environment-performer). However, despite its well-established theoretical roots, there are challenges to implementing it in practice. Accordingly, to help practitioners navigate such challenges, this paper proposes a user-friendly framework that demonstrates the benefits of a CLA. Specifically, to conceptualize the non-linear and individualized nature of learning, and how it can inform player development, we apply Adolph’s notion of learning IN development to explain the fundamental ideas of a CLA. We then exemplify a learning IN development framework, based on a CLA, brought to life in a high-level youth football organization. We contend that this framework can provide a novel approach for presenting the key ideas of a CLA and its powerful pedagogic concepts to practitioners at all levels, informing coach education programs, player development frameworks and learning environment designs in sport.


Technologies ◽  
2021 ◽  
Vol 9 (1) ◽  
pp. 16
Author(s):  
Luca Maule ◽  
Alessandro Luchetti ◽  
Matteo Zanetti ◽  
Paolo Tomasin ◽  
Marco Pertile ◽  
...  

Any severe motor disability limits the ability to interact with the environment, even the domestic one, owing to the loss of control over one's mobility. This work presents RoboEYE, a power wheelchair designed to allow users to move easily and autonomously within their homes. To achieve this goal, an innovative, cost-effective and user-friendly control system was designed, in which a non-invasive eye tracker, a monitor, and a 3D camera represent some of the core elements. RoboEYE integrates functionalities from the mobile robotics field into a standard power wheelchair, with the main advantage of providing the user with two driving options and comfortable navigation. The most intuitive and direct modality provides continuous control of the frontal and angular wheelchair velocities by gazing at different areas of the monitor. The second, semi-autonomous modality allows navigation toward a selected point in the environment by simply pointing at and activating the desired destination, while the system autonomously plans and follows the trajectory that brings the wheelchair to that point. The purpose of this work was to develop the control structure and driving interface designs of the aforementioned driving modalities, taking into account uncertainties in gaze detection and other component-related sources of uncertainty to ensure user safety. Furthermore, the driving modalities, in particular the semi-autonomous one, were modeled and qualified through numerical simulations and experimental verification with volunteer testers who are regular users of standard electric wheelchairs, to verify the efficiency, reliability and safety of the proposed system for domestic use. RoboEYE proved suitable for environments with narrow passages wider than 1 m, comparable to a standard domestic door, and its properties give it large commercialization potential.
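As a toy illustration of the direct driving modality (with invented gains and thresholds, not RoboEYE's calibrated values), the snippet below maps a normalized gaze point on the monitor to linear and angular velocities, using a central dead zone to absorb gaze-detection uncertainty.

```python
def gaze_to_velocity(gx: float, gy: float,
                     v_max: float = 0.5, w_max: float = 0.8,
                     dead_zone: float = 0.15) -> tuple[float, float]:
    """gx, gy are normalized gaze coordinates in [-1, 1] relative to the
    screen centre; returns (linear m/s, angular rad/s). Gains and the dead
    zone are illustrative assumptions."""
    if abs(gx) < dead_zone and abs(gy) < dead_zone:
        return 0.0, 0.0                 # uncertain or centred gaze: stop
    v = v_max * max(gy, 0.0)            # look upward to drive forward
    w = -w_max * gx                     # look sideways to turn
    return v, w

print(gaze_to_velocity(0.05, 0.05))  # inside dead zone -> (0.0, 0.0)
print(gaze_to_velocity(-0.4, 0.8))   # forward and turning left
```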


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4045
Author(s):  
Alessandro Sassu ◽  
Jose Francisco Saenz-Cogollo ◽  
Maurizio Agelli

Edge computing is the best approach for meeting the exponential demand and the real-time requirements of many video analytics applications. Since most of the recent advances in extracting information from images and video rely on computation-heavy deep learning algorithms, there is a growing need for solutions that allow the deployment and use of new models on scalable and flexible edge architectures. In this work, we present Deep-Framework, a novel open-source framework for developing edge-oriented real-time video analytics applications based on deep learning. Deep-Framework has a scalable multi-stream architecture based on Docker and abstracts away from the user the complexity of cluster configuration, orchestration of services, and GPU resource allocation. It provides Python interfaces for integrating deep learning models developed with the most popular frameworks, and also provides high-level APIs based on standard HTTP and WebRTC interfaces for consuming the extracted video data on clients running in browsers or on any other web-based platform.
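As a hypothetical usage sketch only: the host, port and endpoint path below are invented placeholders, since the abstract does not specify Deep-Framework's actual routes; a client consuming the HTTP API might look roughly like this.

```python
import requests

def poll_stream_results(base_url: str = "http://localhost:8000") -> dict:
    """Fetch the latest extracted video data from a placeholder endpoint;
    consult the Deep-Framework documentation for the real routes."""
    resp = requests.get(f"{base_url}/results")   # placeholder endpoint
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(poll_stream_results())
```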

