User Behavior Adaptive AR Guidance for Wayfinding and Task Completion

2021 ◽  
Vol 5 (11) ◽  
pp. 65
Author(s):  
Camille Truong-Allié ◽  
Alexis Paljic ◽  
Alexis Roux ◽  
Martin Herbeth

Augmented reality (AR) is widely used to guide users through complex tasks, for example, in education or industry. Sometimes, these tasks are a succession of subtasks, possibly distant from each other. This can happen, for instance, in inspection operations, where AR devices can give instructions about subtasks to perform in several rooms. In this case, AR guidance is needed both to indicate where to head to perform the subtasks and to instruct the user on how to perform them. In this paper, we propose an approach based on user activity detection: an AR device displays the wayfinding guidance when the current user activity suggests it is needed. We designed a first prototype on a head-mounted display, using a neural network for user activity detection, and compared it with two other guidance temporality strategies in terms of efficiency and user preferences. Our results show that the most efficient guidance temporality depends on user familiarity with the AR display. While our proposed guidance has not proven more efficient than the other two, our experiment hints at several improvements to our prototype, a first step toward efficient guidance for both wayfinding and complex task completion.
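
The abstract does not give implementation details for the activity-based gating, so the following is only a minimal sketch of the idea in Python: a hypothetical activity label set and gating rule decide when wayfinding cues are shown. The `Activity` labels and `should_show_wayfinding` function are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of activity-gated wayfinding guidance (illustrative only).
# `Activity` stands in for the output of the paper's neural-network activity
# detector; the label set and the gating rule below are assumptions.
from enum import Enum

class Activity(Enum):
    WALKING = "walking"          # user is moving between rooms
    PERFORMING_SUBTASK = "task"  # user is working on a subtask
    IDLE = "idle"                # user seems unsure what to do next

def should_show_wayfinding(activity: Activity, subtask_done: bool) -> bool:
    """Show navigation cues only when the detected activity suggests the
    user needs to head to the next subtask location."""
    if not subtask_done:
        return False                      # keep task instructions on screen
    return activity in (Activity.WALKING, Activity.IDLE)

# Example: subtask finished and the user starts walking -> display arrows.
print(should_show_wayfinding(Activity.WALKING, subtask_done=True))  # True
```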

2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Qian Gao ◽  
Pengcheng Ma

Due to the influence of context information on user behavior, context-aware recommendation systems (CARS) have attracted extensive attention in recent years. State-of-the-art context-aware recommendation systems map the original multi-field features into a shared hidden space and then simply feed them into a deep neural network (DNN) or another specially designed network. However, this simple, unstructured combination of feature fields limits the ability to model complex interactions in a sufficiently flexible and explicit way across different domains, which makes accurate user behavior prediction difficult. In this paper, a graph structure is used to establish the interactions between context and users/items. By modeling user behavior, we can explore user preferences in different context environments and thereby make personalized recommendations. In particular, we construct context-user and context-item interaction graphs separately. In each interaction graph, the nodes are composed of user feature fields, item feature fields, and the feature fields of different contexts, and different feature fields can interact through edges. The task of modeling feature interactions can therefore be transformed into modeling node interactions on the corresponding graph. To this end, an innovative model called the context-aware graph neural network (CA-GNN) is designed. Furthermore, to obtain more accurate and efficient recommendation results, we first use an attention mechanism to improve the interpretability of CA-GNN, and second, we incorporate the degree of physical fatigue, a contextual feature that has not been used in traditional CARS, as critical contextual information in CA-GNN. We conducted experiments on the Food and Yelp datasets. The experimental results show that CA-GNN outperforms other methods in terms of root mean square error (RMSE) and mean absolute error (MAE).
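
As a rough illustration of the attention-weighted node interaction the abstract describes, the PyTorch sketch below lets user, item, and context feature-field embeddings attend to one another. The layer sizes, scoring function, and class names are assumptions for illustration, not the published CA-GNN.

```python
# Illustrative sketch (not the authors' code): attention-weighted message
# passing among user, item, and context feature-field nodes, in the spirit
# of the described context-aware graph neural network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FieldAttentionLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.attn = nn.Linear(2 * dim, 1)

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        # nodes: (num_fields, dim) embeddings of user/item/context fields.
        n, d = nodes.shape
        h = self.proj(nodes)                                       # (n, d)
        pairs = torch.cat(
            [h.unsqueeze(1).expand(n, n, d),
             h.unsqueeze(0).expand(n, n, d)], dim=-1)              # (n, n, 2d)
        scores = F.softmax(self.attn(pairs).squeeze(-1), dim=-1)   # (n, n)
        return F.relu(scores @ h)                                  # aggregated fields

# Toy usage: 3 feature-field nodes (user, item, context), embedding size 8.
layer = FieldAttentionLayer(dim=8)
out = layer(torch.randn(3, 8))
print(out.shape)  # torch.Size([3, 8])
```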


Animals ◽  
2020 ◽  
Vol 10 (11) ◽  
pp. 2034
Author(s):  
Jennifer J. Gonzalez ◽  
Abozar Nasirahmadi ◽  
Ute Knierim

In the search for an early warning system for cannibalism, a newly developed automatic pecking activity detection system was validated in this study and used to investigate how pecking activity changes over the rearing phase and before cannibalistic outbreaks. Data were recorded on two farms, one with female (intact beaks) and the other with male (trimmed beaks) turkeys. A metallic pecking object equipped with a microphone was installed in the barn and video monitored. Pecking activity was continuously recorded and fed into a convolutional neural network (CNN) model that automatically detected pecks. The CNN was validated on both farms, and very satisfactory detection performance was reached (mean sensitivity/recall, specificity, accuracy, precision, and F1-score around 90% or higher). The extent of pecking at the object differed between farms, but the objects were used throughout the recording time, with the highest activity in the morning hours. Daily pecking frequencies showed a slight downward trend over the rearing period, although on both farms they increased again in week 5 of life. No clear associations could be found between pecking frequencies and the three cannibalistic outbreaks that occurred in one batch on farm 1. The detection system is usable for further research, but it should be further automated and tested under various farm conditions.
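
A minimal sketch of a CNN peck classifier of the kind described, operating on short spectrogram patches of the microphone signal; the architecture, input shape, and class names are assumptions, not the validated model.

```python
# Minimal sketch (assumed architecture, not the validated model): a small
# CNN that classifies short audio spectrogram patches as "peck" vs. "no peck".
import torch
import torch.nn as nn

class PeckDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)  # peck / no peck

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, 1, freq_bins, time_frames) spectrogram patches
        return self.classifier(self.features(spec).flatten(1))

logits = PeckDetector()(torch.randn(4, 1, 64, 32))
print(logits.shape)  # torch.Size([4, 2])
```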


Author(s):  
Eugene Hayden ◽  
Kang Wang ◽  
Chengjie Wu ◽  
Shi Cao

This study explores the design, implementation, and evaluation of an augmented reality (AR) prototype that assists novice operators in performing procedural tasks in simulator environments. The prototype uses an optical see-through head-mounted display (OST HMD) in conjunction with a simulator display to overlay sequences of interactive visual and attention-guiding cues onto the operator's field of view. We used a 2x2 within-subject design crossing two conditions (with/without AR cues) with two procedural tasks (preflight and landing); a voice assistant was available in both conditions. The experiment involved twenty-six novice operators. The results demonstrated that augmented reality had benefits in terms of improved situation awareness and accuracy; however, it yielded longer task completion times, creating a speed-accuracy trade-off in favour of accuracy. No significant effect on mental workload was found. The results suggest that augmented reality systems have the potential to be used by a wider audience of operators.


Author(s):  
Yunfei Fu ◽  
Hongchuan Yu ◽  
Chih-Kuo Yeh ◽  
Tong-Yee Lee ◽  
Jian J. Zhang

Brushstrokes are viewed as the artist's "handwriting" in a painting. In many applications, such as style learning and transfer, painting mimicry, and painting authentication, it is highly desirable to quantitatively and accurately identify brushstroke characteristics in old masters' pieces using computer programs. However, due to the hundreds or thousands of intermingling brushstrokes in a painting, this remains challenging. This article proposes an efficient algorithm for brushstroke extraction based on a deep neural network, named DStroke. Compared to the state-of-the-art research, the main merit of DStroke is that it automatically and rapidly extracts brushstrokes from a painting without manual annotation, while accurately approximating the real brushstrokes with high reliability. Notably, faithfully recovering the soft transitions between brushstrokes is often ignored by other methods. In fact, the details of brushstrokes in a masterpiece (e.g., shapes, colors, texture, overlaps) are highly desired by artists, since they hold promise to enhance and extend artists' powers, just as microscopes extend biologists' powers. To demonstrate the efficiency of DStroke, we apply it to a set of real scans of paintings and a set of synthetic paintings. Experiments show that DStroke is noticeably faster and more accurate at identifying and extracting brushstrokes than the other methods.


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3068
Author(s):  
Soumaya Dghim ◽  
Carlos M. Travieso-González ◽  
Radim Burget

The use of image processing tools, machine learning, and deep learning approaches has become very useful and robust in recent years. This paper introduces the detection of Nosema disease, which is considered one of the most economically significant diseases today. This work presents a solution for recognizing and identifying Nosema cells among the other objects present in microscopic images. Two main strategies are examined. The first strategy uses image processing tools to extract the most valuable information and features from the dataset of microscopic images; machine learning methods, namely an artificial neural network (ANN) and a support vector machine (SVM), are then applied to detect and classify the Nosema disease cells. The second strategy explores deep learning and transfer learning. Several approaches were examined, including a convolutional neural network (CNN) classifier and several transfer-learning methods (AlexNet, VGG-16, and VGG-19), which were fine-tuned and applied to the object sub-images in order to distinguish Nosema images from the other object images. The best accuracy, 96.25%, was reached by the fine-tuned VGG-16 pre-trained neural network.
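
A minimal sketch of the VGG-16 transfer-learning setup described above, assuming a torchvision implementation; which layers are frozen and the two-class head are assumptions rather than the authors' exact recipe.

```python
# Sketch of the transfer-learning strategy described (VGG-16 fine-tuned for
# Nosema vs. other-object sub-images). Framework choice (torchvision) and
# the freezing scheme are assumptions, not the authors' exact configuration.
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Freeze the convolutional backbone; fine-tune only the classifier head.
for p in model.features.parameters():
    p.requires_grad = False

# Replace the final layer: 2 classes (Nosema cell vs. other object).
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)
```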


2015 ◽  
Vol 764-765 ◽  
pp. 740-746
Author(s):  
Hang Yuan ◽  
Chen Lu ◽  
Ze Tao Xiong ◽  
Hong Mei Liu

Fault detection for aileron actuators mainly aims to enhance reliability and fault-tolerant capability. Considering the complexity of the working conditions of aileron actuators, a fault detection method for aileron actuators under variable conditions is proposed in this study. A bi-step neural network is utilized for fault detection. The first neural network, which serves as the observer, is established to monitor the aileron actuator and generate the residual error. The other neural network synchronously generates the corresponding adaptive threshold. Faults are detected by comparing the residual error with the threshold. To account for the variable conditions, aerodynamic loads are introduced into the bi-step neural network, and the training order spectra are designed. Finally, the effectiveness of the proposed scheme is demonstrated on a simulation model with different faults.
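
A minimal sketch of the detection rule described above: the observer network's prediction yields a residual error, which is compared against the adaptive threshold produced by the second network. The two networks themselves are not reproduced here, and the function name and values are purely illustrative.

```python
# Sketch of the bi-step detection logic. Only the residual-vs-adaptive-
# threshold comparison described in the abstract is shown; the observer
# and threshold networks are stubbed out as plain input values.

def detect_fault(measured: float, observer_prediction: float,
                 adaptive_threshold: float) -> bool:
    """Flag a fault whenever the residual error exceeds the
    condition-dependent threshold from the second network."""
    residual = abs(measured - observer_prediction)
    return residual > adaptive_threshold

# Toy example: one sample under a high aerodynamic load condition.
print(detect_fault(measured=1.30, observer_prediction=1.02,
                   adaptive_threshold=0.15))  # True -> fault flagged
```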


Author(s):  
Valerii Dmitrienko ◽  
Sergey Leonov ◽  
Mykola Mezentsev

The idea of Belnap's four-valued logic is that modern computers should function normally not only with true and false values of the input information, but also under conditions of inconsistency and incompleteness of that information. Belnap's logic introduces four truth values: T (true), F (false), N (none: neither true nor false), and B (both: not only the one but also the other). For ease of working with these truth values, the designations (1, 0, n, b) are introduced. Belnap's logic can be used to obtain estimates of proximity measures for discrete objects, for which the Jaccard and Needham, Russell and Rao, Sokal and Michener, Hamming, and other functions are used. In this case, it becomes possible to assess the proximity, recognition, and classification of objects under uncertainty, when the truth values are taken from the set (1, 0, n, b). Based on the architecture of the Hamming neural network, neural networks have been developed that allow calculating the distances between objects described using the truth values (1, 0, n, b). Keywords: four-valued Belnap logic, Belnap computer, proximity assessment, recognition and classification, proximity function, neural network.
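
As a worked illustration of how a Hamming-style proximity can be extended to the truth-value set (1, 0, n, b), the sketch below scores exact agreement fully and treats "b" (both) as partially agreeing with any other value. This scoring rule is an assumption chosen for illustration, not the networks developed in the paper.

```python
# Illustrative sketch: a Hamming-style proximity over the four truth values
# {1, 0, n, b}. The pairwise scoring rule (full credit for exact agreement,
# half credit when one value is "b") is an assumption for illustration.

def pair_score(a: str, b: str) -> float:
    if a == b:
        return 1.0
    if "b" in (a, b):       # "both" partially agrees with the other values
        return 0.5
    return 0.0

def proximity(x, y) -> float:
    """Average pairwise agreement between two equally long descriptions."""
    return sum(pair_score(a, b) for a, b in zip(x, y)) / len(x)

print(proximity(["1", "0", "n", "b"], ["1", "b", "n", "0"]))  # 0.75
```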


2020 ◽  
Author(s):  
Chiou-Jye Huang ◽  
Yamin Shen ◽  
Ping-Huan Kuo ◽  
Yung-Hsiang Chen

The coronavirus disease 2019 pandemic continues as of March 26 and spread to Europe on approximately February 24. A report from April 29 revealed 1.26 million confirmed cases and 125,928 deaths in Europe. This study proposed a novel deep neural network framework, COVID-19Net, which combines a convolutional neural network (CNN) and bidirectional gated recurrent units (GRUs) in parallel. Three European countries with severe outbreaks were studied, Germany, Italy, and Spain, to extract spatiotemporal features and predict the number of confirmed cases. The prediction results acquired from COVID-19Net were compared to those obtained using a CNN, GRU, and CNN-GRU. The mean absolute error (MAE), mean absolute percentage error (MAPE), and root mean square error (RMSE), which are commonly used model assessment indices, were used to compare the accuracy of the models. The results verified that COVID-19Net was notably more accurate than the other models. The MAPE generated by COVID-19Net was 1.447 for Germany, 1.801 for Italy, and 2.828 for Spain, considerably lower than those of the other models. This indicates that the proposed framework can accurately predict the accumulated number of confirmed cases in the three countries and serve as a crucial reference for devising public health strategies.
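
A minimal sketch of the parallel CNN plus bidirectional GRU combination described above, applied to a window of past daily case counts; the layer sizes, window length, and single-step output are assumptions, not the published COVID-19Net configuration.

```python
# Sketch (assumed layer sizes, not the published COVID-19Net configuration):
# a 1-D CNN branch and a bidirectional GRU branch run in parallel over a
# window of past daily case counts, and their features are concatenated.
import torch
import torch.nn as nn

class ParallelCNNBiGRU(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.gru = nn.GRU(input_size=1, hidden_size=hidden,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(hidden + 2 * hidden, 1)  # next-day prediction

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window) past confirmed-case counts
        c = self.cnn(x.unsqueeze(1)).squeeze(-1)        # (batch, hidden)
        _, h = self.gru(x.unsqueeze(-1))                # h: (2, batch, hidden)
        g = torch.cat([h[0], h[1]], dim=-1)             # (batch, 2*hidden)
        return self.head(torch.cat([c, g], dim=-1))     # (batch, 1)

pred = ParallelCNNBiGRU()(torch.randn(8, 14))  # batch of 8, 14-day window
print(pred.shape)  # torch.Size([8, 1])
```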

