Integrated System of Mixed Virtual Reality Based on Data Glove CyberGlove II and Robotic Arm MechaTE Robot

2014 ◽  
Vol 611 ◽  
pp. 239-244 ◽  
Author(s):  
Juraj Kováč ◽  
František Ďurovský ◽  
Jozef Varga

The proposed paper describes the development of a low-cost CyberGlove II – MechaTE robotic hand interface intended for future use in virtual and mixed reality robot programming. The main goal is to explore the possibilities and gain programming experience in controlling mechanical hands by means of data gloves and their interconnection with virtual reality modeling software. The first part of the paper describes recent progress in using virtual reality for intuitive robot programming; the second part gives an overview of recent developments in mechanical hand construction, as well as currently available data gloves. The last part provides details about the CyberGlove – MechaTE interface and its potential for intuitive robot programming methods in virtual or mixed reality environments.

Author(s):  
Stefan Bittmann

Virtual reality (VR) is the term used to describe representation and perception in a computer-generated, virtual environment. The term was coined by author Damien Broderick in his 1982 novel "The Judas Mandala". The term "mixed reality" describes the mixing of virtual reality with pure reality; the term "hyper-reality" is also used. Immersion plays a major role here: it describes the embedding of the user in the virtual world. A virtual world is considered plausible if the interaction within it is logically consistent. This interactivity creates the illusion that what seems to be happening is actually happening. A common problem with VR is "motion sickness". To create a sense of immersion, special output devices are needed to display virtual worlds; head-mounted displays, CAVEs, and shutter glasses are mainly used. Input devices are needed for interaction: the 3D mouse, data glove, and flystick all play a role here, as does the omnidirectional treadmill, with which walking in virtual space is controlled by real walking movements.


2021 ◽  
Vol 3 (1) ◽  
pp. 6-7
Author(s):  
Kathryn MacCallum

Mixed reality (MR) provides new opportunities for creative and innovative learning. MR supports the merging of real and virtual worlds to produce new environments and visualisations where physical and digital objects co-exist and interact in real time (MacCallum & Jamieson, 2017). The MR continuum links both virtual and augmented reality: virtual reality (VR) enables learners to be immersed within a completely virtual world, while augmented reality (AR) blends the real and the virtual world. MR embraces the spectrum between the real and the virtual; the mix of the virtual and real worlds may vary depending on the application. The integration of MR into education provides specific affordances which make it uniquely suited to supporting learning (Parson & MacCallum, 2020; Bacca, Baldiris, Fabregat, Graf & Kinshuk, 2014). These affordances give students unique opportunities to learn and to develop 21st-century learning capabilities (Schrier, 2006; Bower, Howe, McCredie, Robinson, & Grover, 2014). In general, most integration of MR in the classroom has focused on students being consumers of these experiences. However, enabling students to create their own experiences allows a wider range of learning outcomes to be incorporated into the learning experience. Making students the creators and designers of their own MR experiences provides a unique opportunity to integrate learning across the curriculum and supports the development of computational thinking and stronger digital skills. The integration of student-created artefacts has in particular been shown to produce greater engagement and better outcomes for all students (Ananiadou & Claro, 2009). In the past, the development of student-created MR experiences has been difficult, especially due to the steep learning curve of the technology and the overall expense of acquiring the tools needed to develop these experiences.
The recent development of low-cost mobile and online MR tools and technologies has, however, enabled a scaffolded approach to the development of student-driven artefacts that does not require significant technical ability (MacCallum & Jamieson, 2017). Thanks to these advances, students can now create their own MR digital experiences, which can drive learning across the curriculum. This presentation explores how teachers at two high schools in New Zealand have started to explore and integrate MR into their STEAM classes. It draws on the results of a Teaching and Learning Research Initiative (TLRI) project investigating the experiences and reflections of a group of secondary teachers exploring the use and adoption of mixed reality (augmented and virtual reality) for cross-curricular teaching. The presentation will show how these teachers have started to engage with MR to support the principles of student-created digital experiences integrated into STEAM domains.


2014 ◽  
Vol 568-570 ◽  
pp. 1834-1838
Author(s):  
Feng Jie Sun ◽  
He Chen ◽  
Hui Juan Liu

Three-dimensional virtual reality technology was introduced into the visualised substation. Virtual substation models of the main electrical equipment and the whole scene were built using the 3ds Max modeling software. After designing the response routines for special events and editing the corresponding script language, users can interact with virtual objects in the scene model, producing the feel and experience of being on site. Compared with a real substation, the virtual substation has the advantages of low cost and safe interactive operation, and it is of great significance for improving the technical level of the operators.


A Brain-Computer Interface (BCI) is a technology that enables a human to communicate with an external device to achieve a desired result. This paper presents Motor Imagery (MI) Electroencephalography (EEG) signal-based robotic hand movements: lifting and dropping of an external robotic arm. The MI-EEG signals were extracted using a 3-channel electrode system with the AD8232 amplifier. The electrodes were placed at three locations, namely C3, C4, and the right mastoid. Signal processing methods, namely a Butterworth filter and Sym-9 Wavelet Packet Decomposition (WPD), were applied to de-noise the raw EEG signal. Statistical features such as entropy, variance, standard deviation, covariance, and spectral centroid were extracted from the de-noised signals. The statistical features were then used to train a Multi-Layer Perceptron (MLP) Deep Neural Network (DNN) to classify the hand movement into two classes: 'No Hand Movement' and 'Hand Movement'. The resulting k-fold cross-validated accuracy was 85.41%, and other classification metrics, such as precision, recall (sensitivity), specificity, and F1 score, were also calculated. The trained model was interfaced with an Arduino to move the robotic arm according to the class predicted by the DNN model in a real-time environment. The proposed end-to-end low-cost deep learning framework provides a substantial improvement in real-time BCI.
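The preprocessing pipeline described above (Butterworth band-pass filtering, Sym-9 wavelet packet de-noising, statistical feature extraction) can be sketched in Python. This is a minimal illustration of the general technique, not the authors' implementation: the sampling rate, band edges, decomposition level, and threshold are all assumed values.

```python
# Hedged sketch of an MI-EEG preprocessing chain: Butterworth band-pass,
# Sym-9 wavelet packet de-noising, and a small statistical feature vector.
# All parameter values here are illustrative assumptions, not taken from
# the paper.
import numpy as np
import pywt
from scipy.signal import butter, filtfilt

FS = 250  # assumed sampling rate in Hz

def bandpass(signal, low=8.0, high=30.0, order=4):
    """4th-order Butterworth band-pass over an assumed motor-imagery band."""
    b, a = butter(order, [low / (FS / 2), high / (FS / 2)], btype="band")
    return filtfilt(b, a, signal)

def wpd_denoise(signal, level=3):
    """Soft-threshold the Sym-9 wavelet packet leaves and reconstruct."""
    wp = pywt.WaveletPacket(signal, wavelet="sym9", maxlevel=level)
    thr = 0.5 * np.median(np.abs(signal))  # illustrative threshold rule
    for node in wp.get_level(level, "natural"):
        node.data = pywt.threshold(node.data, thr, mode="soft")
    return wp.reconstruct(update=False)[: len(signal)]

def features(signal):
    """Variance, standard deviation, and spectral centroid of one channel."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    centroid = float(np.sum(freqs * spectrum) / np.sum(spectrum))
    return np.array([signal.var(), signal.std(), centroid])

raw = np.random.randn(FS * 2)        # 2 s of synthetic single-channel EEG
clean = wpd_denoise(bandpass(raw))
vec = features(clean)                # feature vector fed to a classifier
```

In a full pipeline, one such vector per channel and per trial would be stacked into a training matrix for the MLP classifier.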


2021 ◽  
Author(s):  
Isabela Gonçalves Magalhães ◽  
Júlia Alves dos Santos ◽  
Pedro Vitor de Freitas Muzy Lopes ◽  
Gisa Márcia Dutra Valente ◽  
Laura Cremoneze Rangel da Silva ◽  
...  

Advances in digital modeling software, computers with greater processing capacity, and the evolution of dedicated rendering software have contributed to the increased use of images that simulate a real environment, a practice increasingly present both in professional architecture and at university. This practice is already observed in undergraduate courses, where students seek to learn on their own how to use rendering programs and plug-ins. This work aimed to develop a process for teaching architectural design using modeling in SketchUp and low-cost immersive virtual reality simulation tools to analyse model studies in design disciplines. The method employed several renderers and free virtual reality tools available on the Internet. As a result, the students had their first contact with immersive virtual reality tools, which in turn broadened their perception of the details of the objects studied and their spatial vision.


Author(s):  
Lukas Gabriel Dias Gomes ◽  
Adriel Luiz Marques ◽  
Laura Ribeiro

Author(s):  
Irene Maria Gironacci

Recent advancements in extended reality (XR) immersive technologies provide new tools for the development of novel and promising applications for business. Specifically, extended reality training applications are becoming popular in business due to their advantages of low-cost, risk-free, data-oriented training. Extended reality training is the digital simulation of lifelike scenarios for training purposes using technologies such as virtual reality, augmented reality, and mixed reality. Many applications are already available to train employees in specific technical skills, from maintenance to construction. The purpose of this chapter is to review the emerging XR applications developed for management training. Specifically, this chapter focuses on the training of key management skills such as leadership, problem solving, emotional intelligence, communication, and teamwork.


2015 ◽  
Vol 77 (20) ◽  
Author(s):  
Muhammad Fahmi Miskon ◽  
Sameh Mohsen Omer Kanzal ◽  
Muhammad Herman Jamaluddin ◽  
Ahmad Zaki Shukor ◽  
Fariz Ali

Recently, robots have become widely used in various fields, particularly in industry. Despite this, robots still demand considerable knowledge from the operators or workers who deal with them; as a result, robots cannot be easily programmed if the operator or worker is not experienced in robotics. One programming method introduced to make the programming task user-friendly is lead-through robot programming. However, existing lead-through programming methods still require knowledge that most operators and workers do not have. The main objective of this project is to design a lead-through method for point-to-point robot programming using incremental encoder feedback, which can record, save, and play back the robot motion while maintaining the accuracy and precision of the robot. To validate the method, experiments were conducted in which an operator manually moved a two-DOF (degree of freedom) robotic arm on a whiteboard while the encoder feedback was recorded and later played back by the robot. Both the recorded and played-back trajectories were then compared and analysed. The results show a playback accuracy of 96.17% for motor 1 and 97.86% for motor 2, with standard deviations of 0.9593 for motor 1 and 2.33583 for motor 2.
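The record-and-playback comparison described above can be sketched as a simple per-sample agreement score between the taught and replayed encoder trajectories. This is an illustrative formulation only; the paper does not state its exact accuracy formula, and all names and values below are assumed.

```python
# Minimal sketch of lead-through record/playback evaluation: encoder counts
# logged while the operator moves the arm are compared against the counts
# observed during autonomous replay. The accuracy metric here (mean relative
# error converted to a percentage) is an illustrative assumption.
def playback_accuracy(recorded, played_back):
    """Mean per-sample accuracy (%) of a played-back joint trajectory."""
    errors = [abs(r - p) / abs(r)
              for r, p in zip(recorded, played_back) if r != 0]
    return 100.0 * (1.0 - sum(errors) / len(errors))

recorded = [10.0, 20.0, 30.0, 40.0]    # encoder counts while teaching
played_back = [9.8, 19.9, 29.5, 39.6]  # counts during autonomous replay
acc = playback_accuracy(recorded, played_back)  # ≈ 98.7 %
```

A per-motor figure like the 96.17% / 97.86% reported above would come from applying such a score to each joint's trajectory separately.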


Author(s):  
Martha Kafuko ◽  
Ishwar Singh ◽  
Tom Wanyama

Automation systems are generally made up of three main subsystems, namely mechanical, electrical, and software. The interactions among these components affect the integrated system in terms of reliability, quality, scalability, and cost. Therefore, it is imperative that the three components of automation systems are designed concurrently through an integrated design paradigm. This leads to the need to teach integrated design concepts to students in programs such as process automation, electrical and computer engineering, and mechanical engineering. However, due to time constraints, it is almost impossible to run full integrated design class projects. Therefore, instructors have to decide which parts of the design process their class projects will focus on and which parts will only be reviewed for the completeness of the integrated design process. In this paper we present the design and implementation of a microcontroller-based, 3D-printable, low-cost robotic arm suitable for teaching integrated design. Moreover, the paper presents how the robotic arm design is used in an integrated design project of an Industrial Networks and Controllers course. Since the focus of this course is the electrical and software subsystems of the robotic arm, and there is not enough time to do a full design, students review the design of the robotic arm presented in this paper and use it either to 3D print the robotic arm or to purchase a mechanical subsystem that meets the specification.

