Network Anomaly Analysis using the Microsoft HoloLens

Author(s):  
Steve Beitzel ◽  
Josiah Dykstra ◽  
Paul Toliver ◽  
Jason Youzwak

We investigate the feasibility of using the Microsoft HoloLens, a mixed reality device, to visually analyze network capture data and locate anomalies. We developed MINER, a prototype application that visualizes details from network packet captures as 3D stereogram charts. MINER employs a novel approach to time-series visualization that extends the time dimension across two axes, thereby taking advantage of the immersive 3D space available via the HoloLens. Users navigate the application through eye gaze and hand gestures to view summary and detailed bar graphs. Callouts display additional detail based on the user’s immediate gaze. In a user study, volunteers used MINER to locate network attacks in a dataset from the 2013 VAST Challenge. We compared the time and effort with a similar test using traditional tools on a desktop computer. Our findings suggest that network anomaly analysis with the HoloLens achieved comparable effectiveness, efficiency, and satisfaction. We describe the user metrics and feedback collected from these experiments, lessons learned, and suggested future work.
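As an illustration of the "time across two axes" idea described above (this is not MINER's actual code, and the bin sizes are assumptions), a minimal sketch might bin packet timestamps into a coarse/fine time grid, with each cell's count becoming the height of one bar in a 3D chart:

```python
# Hypothetical sketch: split a packet time series across two time axes,
# e.g. minutes (coarse) x seconds (fine), so each (coarse, fine) cell is one bar.
from collections import Counter

def two_axis_bins(timestamps, coarse=60, fine=1):
    """Map packet timestamps (in seconds) to (coarse_bin, fine_bin) -> packet count."""
    counts = Counter()
    for t in timestamps:
        coarse_bin = int(t // coarse)            # e.g. which minute
        fine_bin = int((t % coarse) // fine)     # e.g. which second within that minute
        counts[(coarse_bin, fine_bin)] += 1
    return counts

# Example: three packets in the first minute, one in the second.
print(two_axis_bins([0.2, 0.7, 59.9, 61.5]))
```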

2021 ◽  
Vol 11 (20) ◽  
pp. 9480
Author(s):  
Xinyi Tu ◽  
Juuso Autiosalo ◽  
Adnane Jadid ◽  
Kari Tammi ◽  
Gudrun Klinker

Digital twin technology empowers the digital transformation of the industrial world with an increasing amount of data, which in turn creates a challenging context for designing a human–machine interface (HMI) for operating machines. This work aims at creating an HMI for digital twin-based services. Using an industrial crane platform as a case study, we present a mixed reality (MR) application running on a Microsoft HoloLens 1 device. The application, consisting of visualization, interaction, communication, and registration modules, allows crane operators to both monitor the crane status and control its movement through interactive holograms and bi-directional data communication, with enhanced mobility thanks to spatial registration and tracking of the MR environment. The prototype was quantitatively evaluated for control accuracy in 20 measurements following a step-by-step protocol that we defined to standardize the measurement procedure. The results suggest that the differences between the target and actual positions were within 10 cm in all three dimensions, which we consider sufficiently small for the typical logistics use case of crane operation and which could be further reduced by adopting more robust registration and tracking techniques in future work.
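To make the accuracy evaluation concrete (an illustrative sketch only, not the authors' code; the sample positions are invented), per-axis error between target and actual crane positions can be summarized and checked against the 10 cm bound like this:

```python
# Illustrative sketch: per-axis control error over a set of measurements,
# mirroring the check that target-vs-actual differences stay within 10 cm per axis.
import statistics

def axis_errors(targets, actuals):
    """targets/actuals: lists of (x, y, z) positions in metres."""
    per_axis = {axis: [] for axis in "xyz"}
    for tgt, act in zip(targets, actuals):
        for axis, t, a in zip("xyz", tgt, act):
            per_axis[axis].append(abs(t - a))
    return {axis: (max(errs), statistics.mean(errs)) for axis, errs in per_axis.items()}

# Hypothetical measurements (metres).
targets = [(1.00, 2.00, 0.50), (1.50, 2.20, 0.50)]
actuals = [(1.04, 1.97, 0.52), (1.46, 2.28, 0.49)]
for axis, (worst, mean) in axis_errors(targets, actuals).items():
    print(f"{axis}: max {worst*100:.1f} cm, mean {mean*100:.1f} cm, within 10 cm: {worst <= 0.10}")
```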


2021 ◽  
Vol 2 ◽  
Author(s):  
Adélaïde Genay ◽  
Anatole Lécuyer ◽  
Martin Hachet

This paper studies the sense of embodiment of virtual avatars in Mixed Reality (MR) environments visualized with an Optical See-Through display. We investigated whether the content of the surrounding environment could impact the user’s perception of their avatar, when embodied from a first-person perspective. To do so, we conducted a user study comparing the sense of embodiment toward virtual robot hands in three environment contexts which included progressive quantities of virtual content: real content only, mixed virtual/real content, and virtual content only. Taken together, our results suggest that users tend to accept virtual hands as their own more easily when the environment contains both virtual and real objects (mixed context), allowing them to better merge the two “worlds”. We discuss these results and raise research questions for future work to consider.


i-com ◽  
2021 ◽  
Vol 20 (1) ◽  
pp. 19-32
Author(s):  
Daniel Buschek ◽  
Charlotte Anlauff ◽  
Florian Lachner

This paper reflects on a case study of a user-centred concept development process for a Machine Learning (ML) based design tool, conducted at an industry partner. The resulting concept uses ML to match graphical user interface elements in sketches on paper to their digital counterparts in order to create consistent wireframes. A user study (N=20) with a working prototype shows that designers prefer this concept to the previous manual procedure. Reflecting on our process and findings, we discuss lessons learned for developing ML tools that respect practitioners’ needs and practices.
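As a toy illustration of the matching step (not the paper's model; the feature vectors and widget names are assumptions), a sketched UI element could be mapped to its closest digital counterpart by nearest neighbour over simple shape features:

```python
# Hypothetical sketch: match a hand-drawn UI element to the nearest library widget.
import math

# Assumed toy features: (aspect_ratio, stroke_density, corner_roundness)
widget_library = {
    "button": (3.0, 0.4, 0.4),
    "checkbox": (1.0, 0.3, 0.4),
    "text_field": (5.0, 0.2, 0.4),
    "image": (1.5, 0.8, 0.4),
}

def match_sketch(features):
    """Return the library widget whose feature vector is closest to the sketch."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(widget_library, key=lambda name: dist(features, widget_library[name]))

print(match_sketch((2.8, 0.35, 0.4)))  # -> "button"
```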


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3673
Author(s):  
Stefan Grushko ◽  
Aleš Vysocký ◽  
Petr Oščádal ◽  
Michal Vocetka ◽  
Petr Novák ◽  
...  

In a collaborative scenario, communication between humans and robots is fundamental to achieving good efficiency and ergonomics in task execution. Much research has focused on enabling a robot system to understand and predict human behaviour, allowing the robot to adapt its motion to avoid collisions with human workers. When the production task has a high degree of variability, the robot’s movements can be difficult to predict, leading to anxiety in the worker when the robot changes its trajectory and approaches, since the worker has no information about the robot’s planned movement. Additionally, without information about the robot’s movement, the human worker cannot effectively plan their own activity without forcing the robot to constantly replan its movement. We propose a novel approach to communicating the robot’s intentions to a human worker: haptic feedback devices that notify the worker about the robot’s currently planned trajectory and changes in its status. To verify the effectiveness of the developed human-machine interface in the conditions of a shared collaborative workspace, we designed and conducted a user study among 16 participants, whose objective was to accurately recognise the goal position of the robot during its movement. Data collected during the experiment included both objective and subjective parameters. Statistically significant results indicated that all participants improved their task completion time by over 45% and were generally more subjectively satisfied when completing the task while wearing the haptic feedback devices. The results also suggest the usefulness of the developed notification system, since it improved users’ awareness of the robot’s motion plan.
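As a rough illustration of how a planned goal position could be turned into a wearable cue (the device interface and mapping here are assumptions, not the paper's system), one might pick the wrist device on the side of the goal and scale intensity by proximity:

```python
# Hypothetical sketch: map the robot's planned goal to a left/right wrist vibration cue.
from dataclasses import dataclass

@dataclass
class HapticCue:
    device: str       # "left_wrist" or "right_wrist"
    intensity: float  # 0.0-1.0, stronger when the goal is closer to the worker

def cue_for_goal(goal_xy, worker_xy, max_range=2.0):
    """Pick the device on the side of the goal; scale intensity by proximity."""
    dx = goal_xy[0] - worker_xy[0]
    dy = goal_xy[1] - worker_xy[1]
    distance = (dx * dx + dy * dy) ** 0.5
    side = "right_wrist" if dx >= 0 else "left_wrist"
    intensity = max(0.0, min(1.0, 1.0 - distance / max_range))
    return HapticCue(side, intensity)

print(cue_for_goal(goal_xy=(0.8, 0.4), worker_xy=(0.0, 0.0)))
```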


Author(s):  
Jassim Happa ◽  
Ioannis Agrafiotis ◽  
Martin Helmhout ◽  
Thomas Bashford-Rogers ◽  
Michael Goldsmith ◽  
...  

In recent years, many tools that make use of visualization have been developed to understand attacks, but few aim to predict real-world consequences. We have developed a visualization tool that aims to improve decision support during attacks. Our tool visualizes the propagation of risk from IDS and antivirus (AV) alert data by relating sensor alerts to Business Process (BP) tasks and machine assets, addressing an important capability gap in many Security Operations Centres (SOCs) today. In this paper we present a user study in which we evaluate the tool's usability and its ability to deliver situational awareness to the analyst. Ten analysts from seven SOCs performed carefully designed tasks related to understanding risks and prioritising recovery decisions. The study was conducted in laboratory conditions, with simulated attacks, and used a mixed-method approach to collect data from questionnaires, eye tracking, and voice-recorded interviews. The findings suggest that providing analysts with situational awareness relating to business priorities can help them prioritise response strategies. Finally, we provide an in-depth discussion of the wider questions related to user studies in similar conditions, as well as lessons learned from our user study and from developing a visualization tool of this type.
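To illustrate the alert-to-business-process relation in the simplest possible form (this is not the authors' tool; the asset names, task names, and severity values are invented), alert risk can be rolled up from machine assets to the BP tasks that depend on them so tasks, rather than raw alerts, get prioritised:

```python
# Illustrative sketch: roll sensor-alert risk up from assets to dependent BP tasks.
asset_alerts = {                 # hypothetical IDS/AV alert severities per asset (0-1)
    "web-server": [0.7, 0.4],
    "db-server": [0.9],
    "mail-gw": [],
}
task_assets = {                  # which assets each BP task relies on
    "process-orders": ["web-server", "db-server"],
    "send-invoices": ["mail-gw", "web-server"],
}

def task_risk(task):
    """Task risk = highest alert severity across the assets it depends on."""
    severities = [s for asset in task_assets[task] for s in asset_alerts.get(asset, [])]
    return max(severities, default=0.0)

for task in sorted(task_assets, key=task_risk, reverse=True):
    print(f"{task}: risk {task_risk(task):.1f}")
```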


2021 ◽  
Vol 1 ◽  
pp. 2107-2116
Author(s):  
Agnese Brunzini ◽  
Alessandra Papetti ◽  
Michele Germani ◽  
Erica Adrario

In the medical education field, the use of highly sophisticated simulators and extended reality (XR) simulations allows the training of complex procedures and the acquisition of new knowledge and attitudes. XR is considered useful for the enhancement of healthcare education; however, several issues need further research. The main aim of this study is to define a comprehensive method to design and optimize every kind of simulator and simulation, integrating all the relevant elements concerning scenario design and prototype development. A complete framework for the design of any kind of advanced clinical simulation is proposed, and it has been applied to realize a mixed reality (MR) prototype for the simulation of rachicentesis (lumbar puncture). The purpose of the MR application is to immerse the trainee in a more realistic environment and to put him/her under pressure during the simulation, as in real practice. The application was tested with two different devices: the Vox Gear Plus smartphone headset and the Microsoft HoloLens. Eighteen sixth-year students of the Medicine and Surgery course were enrolled in the study. Results compare the user experience with the two devices and report the simulation performance achieved with the HoloLens.


2020 ◽  
Vol 4 (4) ◽  
pp. 78
Author(s):  
Andoni Rivera Pinto ◽  
Johan Kildal ◽  
Elena Lazkano

In the context of industrial production, a worker who wants to program a robot using the hand-guidance technique needs the robot to be available for programming and not in operation. This means that production with that robot is stopped during that time. A way around this constraint is to perform the same manual guidance steps on a holographic representation of the robot's digital twin, using augmented reality technologies. However, this approach has the limitation that the visual holograms the user tries to grab lack tangibility. We present an interface in which some of this tangibility is provided through ultrasound-based mid-air haptic actuation. We report a user study evaluating the impact of such haptic feedback on a pick-and-place task involving the wrist of a holographic robot arm, and we found the feedback to be beneficial.
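As a rough sketch of the tangibility idea (the haptics interface and thresholds are assumptions, not the paper's implementation), a mid-air focal point could be rendered at the holographic wrist whenever the user's hand is close enough to "grab" it:

```python
# Hypothetical sketch: emit an ultrasound focal point at the holographic wrist
# when the user's hand is within grabbing range, scaling intensity by proximity.
def focal_point(hand_pos, wrist_pos, grab_radius=0.05):
    """Return (position, intensity) for the mid-air haptic focal point, or None."""
    dist = sum((h - w) ** 2 for h, w in zip(hand_pos, wrist_pos)) ** 0.5
    if dist > grab_radius:
        return None                       # hand not on the hologram: no feedback
    intensity = 1.0 - dist / grab_radius  # stronger the closer the hand is to centre
    return (wrist_pos, intensity)

print(focal_point(hand_pos=(0.01, 0.02, 0.00), wrist_pos=(0.0, 0.0, 0.0)))
```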


10.5772/5694 ◽  
2007 ◽  
Vol 4 (2) ◽  
pp. 24 ◽  
Author(s):  
E. Colon ◽  
G. De Cubber ◽  
H. Ping ◽  
J-C Habumuremyi ◽  
H. Sahli ◽  
...  

This paper summarises the main results of 10 years of research and development in humanitarian demining. The Hudem project focuses on mine detection systems and aims at providing different solutions to support mine detection operations. Robots using different kinds of locomotion systems have been designed and tested on dummy minefields. In order to control these robots, software interfaces, control algorithms, visual positioning, and terrain-following systems have also been developed. Typical data acquisition results obtained during trial campaigns with robots and data acquisition systems are reported. Lessons learned during the project and future work conclude this paper.

