Evaluation of Keypoint Descriptors for Flight Simulator Cockpit Elements: WrightBroS Database

Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7687
Author(s):  
Karolina Nurzynska ◽  
Przemysław Skurowski ◽  
Magdalena Pawlyta ◽  
Krzysztof Cyran

The goal of the WrightBroS project is to design a system supporting the training of pilots in a flight simulator. The desired software should run on smart glasses that supplement the visual information with augmented reality data, displaying, for instance, additional training information or descriptions of visible devices in real time. Therefore, rapid recognition of observed objects and their exact positioning is crucial for successful deployment. The keypoint descriptor approach is a natural framework for this purpose. Before it can be applied, however, a thorough examination of specific keypoint location methods and types of keypoint descriptors is required, as these are essential factors affecting the overall accuracy of the approach. In the presented research, we prepared a dedicated database presenting 27 different devices of a flight simulator. We then used it to compare existing state-of-the-art techniques and verify their applicability. We investigated the time necessary for computing a keypoint position, the time needed for preparing a descriptor, and the classification accuracy of the considered approaches. In total, we compared the outcomes of 12 keypoint location methods and 10 keypoint descriptors. The best score recorded for our database was almost 96% for a combination of the ORB method for keypoint localization followed by the BRISK approach as a descriptor.
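The best-performing combination reported above pairs ORB keypoint localization with BRISK description. A minimal OpenCV sketch of that pairing is given below; the image file names, feature count, and ratio-test threshold are illustrative assumptions, not settings from the paper.

```python
# Minimal sketch: ORB keypoint detection + BRISK description, matched with a
# brute-force Hamming matcher (OpenCV). File names are placeholders.
import cv2

query = cv2.imread("cockpit_device.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("simulator_view.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)   # keypoint localization
brisk = cv2.BRISK_create()             # binary descriptor

# Detect keypoints with ORB, then describe those same keypoints with BRISK.
kp_q = orb.detect(query, None)
kp_s = orb.detect(scene, None)
kp_q, des_q = brisk.compute(query, kp_q)
kp_s, des_s = brisk.compute(scene, kp_s)

# Binary descriptors -> Hamming distance; Lowe's ratio test filters weak matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des_q, des_s, k=2)
good = []
for pair in matches:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])
print(f"{len(good)} putative correspondences")
```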

2020 ◽  
pp. 147592172097698
Author(s):  
Shaohan Wang ◽  
Sakib Ashraf Zargar ◽  
Fuh-Gwo Yuan

A two-stage knowledge-based deep learning algorithm is presented for enabling automated damage detection in real time using augmented reality smart glasses. The first stage of the algorithm entails the identification of damage-prone zones within the region of interest; this requires domain knowledge about the damage as well as the structure being inspected. In the second stage, automated damage detection is performed independently within each of the identified zones, starting with the most damage-prone one. For real-time visual inspection enhancement using augmented reality smart glasses, this two-stage approach not only ensures computational feasibility and efficiency but also significantly improves the probability of detection when dealing with structures with complex geometric features. A pilot study is conducted using hands-free Epson BT-300 smart glasses, during which two distinct tasks are performed: first, using a single deep learning model deployed on the augmented reality smart glasses, automatic detection and classification of corrosion/fatigue, the most common cause of failure in high-strength materials, is performed. Then, to highlight the efficacy of the proposed two-stage approach, the more challenging task of defect detection in a multi-joint bolted region is addressed. The pilot study is conducted without any artificial control of external conditions such as acquisition angles, lighting, and so on. While automating the visual inspection process is not a new concept for large-scale structures, in most cases assessment of the collected data is performed offline, and the algorithms/techniques used therein cannot be implemented directly on computationally limited devices such as hands-free augmented reality glasses, which could then be used by inspectors in the field for real-time assistance. The proposed approach serves to overcome this bottleneck.
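The two-stage flow described above can be sketched roughly as follows; the zone list, the scoring function standing in for the deployed model, and the decision threshold are placeholders for illustration, not the authors' implementation.

```python
# Illustrative sketch of the two-stage flow: zones ranked by prior damage
# likelihood (domain knowledge), then per-zone inference, most prone first.
import numpy as np

# Stage 1: damage-prone zones within the region of interest,
# (x, y, w, h, prior), sorted most-to-least damage prone. Values are made up.
ZONES = sorted(
    [(40, 60, 128, 128, 0.9), (220, 60, 128, 128, 0.6), (40, 240, 128, 128, 0.3)],
    key=lambda z: z[4], reverse=True,
)

def damage_score(patch: np.ndarray) -> float:
    """Placeholder for the deployed detection model; returns a pseudo-probability."""
    return float(patch.std() / 128.0)

def inspect(frame: np.ndarray, threshold: float = 0.5):
    # Stage 2: run detection independently in each zone.
    findings = []
    for x, y, w, h, prior in ZONES:
        patch = frame[y:y + h, x:x + w]
        score = damage_score(patch)
        if score >= threshold:
            findings.append({"zone": (x, y, w, h), "score": score, "prior": prior})
    return findings

frame = (np.random.rand(480, 640) * 255).astype(np.uint8)  # stand-in camera frame
print(inspect(frame))
```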


2019 ◽  
Author(s):  
Xiao-Su Hu ◽  
Thiago D. Nascimento ◽  
Mary C Bender ◽  
Theodore Hall ◽  
Sean Petty ◽  
...  

BACKGROUND For many years, clinicians have been seeking objective pain assessment solutions via neuroimaging techniques, focusing on the brain to detect human pain. Unfortunately, most of those techniques are not applicable in the clinical environment or lack accuracy. OBJECTIVE This study aimed to test the feasibility of a mobile neuroimaging-based clinical augmented reality (AR) and artificial intelligence (AI) framework, CLARAi, for objective pain detection and localization directly from the patient’s brain in real time. METHODS Clinical dental pain was triggered in 21 patients by hypersensitive tooth stimulation with 20 consecutive descending cold stimulations (32°C-0°C). We used a portable optical neuroimaging technology, functional near-infrared spectroscopy, to gauge their cortical activity during evoked acute clinical pain. The data were decoded using a neural network (NN)–based AI algorithm to classify hemodynamic response data into pain and no-pain brain states in real time. We tested the performance of several networks (NN with 7 layers, 6 layers, 5 layers, and 3 layers; recurrent NN; and long short-term memory network) upon reorganized data features on pain detection and localization in a simulated real-time environment. In addition, we tested the feasibility of transmitting the neuroimaging data to an AR device, HoloLens, in the same simulated environment, allowing visualization of the ongoing cortical activity on a 3-dimensional brain template virtually plotted on the patient’s head during the clinical consult. RESULTS The artificial neural network (3-layer NN) achieved an optimal classification accuracy of 80.37% (126,000/156,680) for pain and no-pain discrimination, with a positive likelihood ratio (PLR) of 2.35. We further explored a 3-class localization task of left-side pain, right-side pain, and no-pain states, and the convolutional NN-6 (6-layer NN) achieved the highest classification accuracy of 74.23% (1040/1401) with a PLR of 2.02. CONCLUSIONS Additional studies are needed to optimize and validate our prototype CLARAi framework for other pains and neurologic disorders. However, we presented an innovative and feasible neuroimaging-based AR/AI concept that can potentially transform the human brain into an objective target to visualize and precisely measure and localize pain in real time where it is most needed: in the doctor’s office. INTERNATIONAL REGISTERED REPORT RR1-10.2196/13594
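As a rough illustration of the classification step, the sketch below trains a small fully connected pain/no-pain classifier on synthetic fNIRS-style features with scikit-learn; the feature dimensionality, layer sizes, and data are assumptions, not the CLARAi configuration.

```python
# Minimal sketch: a small fully connected pain/no-pain classifier on
# fNIRS-style hemodynamic features. Synthetic data and layer sizes are
# illustrative assumptions only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_samples, n_features = 2000, 24           # e.g., HbO/HbR values per channel
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)     # 0 = no pain, 1 = pain (random stand-in)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32, 16),  # a "3-layer" style MLP
                    max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")  # ~0.5 on random labels
```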


Author(s):  
Anjali Daisy

Augmented reality (AR) refers to the layering of visual information onto a live picture of your physical surroundings, enhancing the real-world environment in real time. Snapchat and Instagram filters are current examples of augmented reality. Since this technology has proven its ability to captivate users, more and more brands are using it to engage current and potential customers. In an environment where almost everyone has a smartphone, augmented reality seems like an obvious next step, since no additional hardware is needed. It is generally quite straightforward for people to use and has a great capacity to enhance the effects of marketing.


2020 ◽  
Vol 12 (3) ◽  
pp. 464
Author(s):  
Shuang Liu ◽  
Mei Li ◽  
Zhong Zhang ◽  
Baihua Xiao ◽  
Tariq S. Durrani

In recent times, deep neural networks have drawn much attention in ground-based cloud recognition. Yet such approaches center solely on learning global features from visual information, which results in incomplete representations of ground-based clouds. In this paper, we propose a novel method named multi-evidence and multi-modal fusion network (MMFN) for ground-based cloud recognition, which can learn extended cloud information by fusing heterogeneous features in a unified framework. Namely, MMFN exploits multiple pieces of evidence, i.e., global and local visual features, from ground-based cloud images using the main network and the attentive network. In the attentive network, local visual features are extracted from attentive maps, which are obtained by refining salient patterns from convolutional activation maps. Meanwhile, the multi-modal network in MMFN learns multi-modal features for ground-based clouds. To fully fuse the multi-modal and multi-evidence visual features, we design two fusion layers in MMFN to incorporate multi-modal features with global and local visual features, respectively. Furthermore, we release the first multi-modal ground-based cloud dataset, named MGCD, which contains not only ground-based cloud images but also the multi-modal information corresponding to each cloud image. MMFN is evaluated on MGCD and achieves a classification accuracy of 88.63% in comparison with state-of-the-art methods, which validates its effectiveness for ground-based cloud recognition.
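The fusion idea can be sketched roughly as below: a global visual branch, an attentive local branch, and a multi-modal branch combined by two fusion layers. The tiny backbone, the attention step, and all layer sizes are simplified assumptions, not the published MMFN architecture.

```python
# Rough sketch of the fusion idea: global and attention-weighted local visual
# features plus a multi-modal feature, combined by two fusion layers.
import torch
import torch.nn as nn

class FusionSketch(nn.Module):
    def __init__(self, n_classes: int = 7, n_modal: int = 4):
        super().__init__()
        self.backbone = nn.Sequential(            # tiny stand-in CNN backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.attn = nn.Conv2d(64, 1, 1)           # one-channel attentive map
        self.modal = nn.Sequential(nn.Linear(n_modal, 64), nn.ReLU())
        self.fuse_global = nn.Linear(64 + 64, 128)
        self.fuse_local = nn.Linear(64 + 64, 128)
        self.cls = nn.Linear(256, n_classes)

    def forward(self, image, modal):
        feat = self.backbone(image)                       # B x 64 x H x W
        g = feat.mean(dim=(2, 3))                         # global pooled feature
        w = torch.sigmoid(self.attn(feat))                # attentive map
        l = (feat * w).sum(dim=(2, 3)) / w.sum(dim=(2, 3)).clamp(min=1e-6)
        m = self.modal(modal)                             # multi-modal feature
        fused = torch.cat([torch.relu(self.fuse_global(torch.cat([g, m], 1))),
                           torch.relu(self.fuse_local(torch.cat([l, m], 1)))], 1)
        return self.cls(fused)

logits = FusionSketch()(torch.randn(2, 3, 224, 224), torch.randn(2, 4))
print(logits.shape)  # torch.Size([2, 7])
```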


Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 687 ◽  
Author(s):  
Maxime Ferrera ◽  
Julien Moras ◽  
Pauline Trouvé-Peloux ◽  
Vincent Creuze

In the context of underwater robotics, the visual degradation induced by the properties of the medium makes the exclusive use of cameras for localization difficult. Hence, many underwater localization methods are based on expensive navigation sensors associated with acoustic positioning. On the other hand, pure visual localization methods have shown great potential for underwater localization, but challenging conditions, such as turbidity and dynamic scenes, remain complex to tackle. In this paper, we propose a new visual odometry method designed to be robust to these visual perturbations. The proposed algorithm has been assessed on both simulated and real underwater datasets and outperforms state-of-the-art terrestrial visual SLAM methods under many of the most challenging conditions. The main application of this work is the localization of Remotely Operated Vehicles used for underwater archaeological missions, but the developed system can be used in other applications as long as visual information is available.
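For orientation, the sketch below shows a generic feature-based two-view odometry step (ORB matching followed by RANSAC essential-matrix pose recovery in OpenCV); it is not the authors' underwater-specific method, and the camera intrinsics and image paths are placeholders.

```python
# Generic two-view visual-odometry step (not the paper's method): ORB matching
# followed by essential-matrix pose recovery. Intrinsics/paths are placeholders.
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],     # assumed pinhole intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(prev, None)
kp2, des2 = orb.detectAndCompute(curr, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# RANSAC on the essential matrix rejects outliers (e.g., from turbidity or
# moving objects) before recovering the relative camera motion.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print("relative rotation:\n", R, "\nunit translation:", t.ravel())
```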


2010 ◽  
Vol 20 (1) ◽  
pp. 9-13 ◽  
Author(s):  
Glenn Tellis ◽  
Lori Cimino ◽  
Jennifer Alberti

Abstract The purpose of this article is to provide clinical supervisors with information pertaining to state-of-the-art clinic observation technology. We use a novel video-capture technology, the Landro Play Analyzer, to supervise clinical sessions as well as to train students to improve their clinical skills. We can observe four clinical sessions simultaneously from a central observation center. In addition, speech samples can be analyzed in real time; saved on a CD, DVD, or flash/jump drive; viewed in slow motion; paused; and analyzed with Microsoft Excel. Procedures for applying the technology to clinical training and supervision are discussed.


2015 ◽  
Vol 6 (2) ◽  
Author(s):  
Rujianto Eko Saputro ◽  
Dhanar Intan Surya Saputra

Learning media have always followed the development of available technology, from print and audio-visual technology to computers and to combinations of print and computer technology. Today, learning media that combine print and computer technology can be realized with Augmented Reality (AR). Augmented Reality (AR) is a technology used to bring the virtual world into the real world in real time. The human digestive organs consist of the mouth, the throat or esophagus, the stomach, the small intestine, and the large intestine. Current learning media for introducing the human digestive organs are very monotonous, relying on pictures, books, or other projection devices. Using Augmented Reality, which can bring the virtual world into the real world, these objects can be turned into 3D objects, so that the learning method is no longer monotonous and children are motivated to learn more, such as learning the name and description of each organ.

