Maritime navigational assistance by visual augmentation

2021 ◽  
pp. 1-19
Author(s):  
Bruno G. Leite ◽  
Helio T. Sinohara ◽  
Newton Maruyama ◽  
Eduardo A. Tannuri

Abstract. Several types of equipment have been developed over the years to assist ship operators with their tasks. Nowadays, navigational equipment typically provides an enormous volume of information, so there is a corresponding need for efficiency in how such information is presented to ship operators. Augmented reality (AR) systems are being investigated for such efficient presentation of typical navigational information. The present work is particularly interested in an AR architecture commonly referred to as monitor augmented reality (MAR). In this context, the development of MAR systems is briefly summarised. The projection of three-dimensional elements into a camera scene is presented. Potential visual assets are proposed and exemplified with videos from a ship manoeuvring simulator and a real experiment. Enhanced scenes combining pertinent virtual elements are shown, exemplifying potential assistance applications. The authors aim to contribute to the popularisation of MAR systems in maritime environments. Further research is suggested to define optimal combinations of visual elements for alternative maritime navigation scenarios. Note that many challenges remain for the deployment of MAR tools in typical maritime operations.
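To illustrate the projection step mentioned in the abstract, below is a minimal sketch of projecting a 3D world point into a camera image with a standard pinhole model, which is the usual basis for overlaying virtual elements on a camera scene. The function name, matrix values, and the buoy example are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: projecting 3D world points into a camera image with a
# standard pinhole model. All numeric values are illustrative assumptions.
import numpy as np

def project_points(points_world, K, R, t):
    """Project Nx3 world points to Nx2 pixel coordinates."""
    # Transform world coordinates into the camera frame.
    points_cam = (R @ points_world.T + t.reshape(3, 1)).T
    # Apply the intrinsic matrix and normalise by depth.
    uvw = (K @ points_cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]

# Example: a buoy 50 m ahead and slightly to starboard of the camera.
K = np.array([[1000.0, 0.0, 960.0],   # fx, skew, cx
              [0.0, 1000.0, 540.0],   # fy, cy
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # camera aligned with the world frame
t = np.zeros(3)                        # camera at the world origin
buoy = np.array([[5.0, 0.0, 50.0]])    # x (right), y (down), z (forward), metres
print(project_points(buoy, K, R, t))   # -> pixel position for the overlay
```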

2021 ◽  
Vol 45 (5) ◽  
Author(s):  
Yuri Nagayo ◽  
Toki Saito ◽  
Hiroshi Oyama

Abstract. The surgical education environment has been changing significantly due to restricted work hours, limited resources, and increasing public concern for safety and quality, leading to the evolution of simulation-based training in surgery. Of the various simulators, low-fidelity simulators are widely used to practise surgical skills such as sutures because they are portable, inexpensive, and easy to use without requiring complicated settings. However, since low-fidelity simulators do not offer any teaching information, trainees practise with them on their own, referring to textbooks or videos, which are insufficient for learning open surgical procedures. This study aimed to develop a new suture training system for open surgery that provides trainees with three-dimensional information on exemplary procedures performed by experts and allows them to observe and imitate the procedures during self-practice. The proposed system consists of a motion capture system for surgical instruments and a three-dimensional replication system that reproduces the captured procedures on the surgical field. Motion capture of surgical instruments was achieved inexpensively by using cylindrical augmented reality (AR) markers, and replication of the captured procedures was realized by visualizing them three-dimensionally, at the same position and orientation as captured, using an AR device. For the subcuticular interrupted suture, it was confirmed that the proposed system enabled users to observe experts' procedures from any angle and imitate them by manipulating the actual surgical instruments during self-practice. We expect that this training system will contribute to developing a novel surgical training method that enables trainees to learn surgical skills by themselves in the absence of experts.
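As a rough illustration of the marker-based motion capture described above, the sketch below estimates the pose of a printed marker attached to an instrument using OpenCV's ArUco module (API of OpenCV ≥ 4.7). It uses planar square markers as a stand-in for the cylindrical markers in the paper; the marker size, dictionary, camera parameters, and file name are assumptions.

```python
# Minimal sketch of marker-based pose estimation with OpenCV's ArUco module,
# used as a planar stand-in for the paper's cylindrical AR markers.
# Marker size, dictionary, camera parameters, and the frame path are assumed.
import cv2
import numpy as np

camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)          # assume an undistorted camera
marker_length = 0.02               # marker side length in metres (assumed)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

# Corners of one square marker, expressed in the marker's own frame.
half = marker_length / 2.0
obj_points = np.array([[-half,  half, 0.0],
                       [ half,  half, 0.0],
                       [ half, -half, 0.0],
                       [-half, -half, 0.0]], dtype=np.float32)

frame = cv2.imread("instrument_frame.png")   # hypothetical captured frame
corners, ids, _ = detector.detectMarkers(frame)
if ids is not None:
    for c in corners:
        # Recover the marker (and hence instrument) pose relative to the camera.
        ok, rvec, tvec = cv2.solvePnP(obj_points, c.reshape(4, 2),
                                      camera_matrix, dist_coeffs)
        if ok:
            print("instrument pose:", rvec.ravel(), tvec.ravel())
```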


2019 ◽  
Vol 18 (6) ◽  
pp. e2690 ◽  
Author(s):  
F. Porpiglia ◽  
E. Checcucci ◽  
D. Amparore ◽  
F. Piramide ◽  
P. Verri ◽  
...  

2018 ◽  
Vol 218 ◽  
pp. 04012
Author(s):  
Finsa Nurpandi ◽  
Agung Gumelar

Chemical elements, represented by symbols on the periodic table, are one of the core topics in chemistry. Low levels of activity, interest, and achievement in school chemistry learning stem from students' general difficulty in solving problems related to chemical reactions. In addition, most chemical concepts are abstract, so it is difficult to clearly imagine molecular structures. Augmented Reality can integrate digital elements with the real world in real time and adapt to the surrounding environment. Augmented Reality can make the learning process more interactive because users can interact with the content directly and naturally. With this application, markers for atoms in the periodic table are scanned using the camera of an Android-based smartphone on which the app is installed. The scan results are then compared with existing data, and the corresponding molecular structure is displayed in three-dimensional form. Users can also observe reactions between atoms by combining multiple markers simultaneously. The application is built following a user-centered design approach, using Unity with a personal license as the development tool. By using this app, studying chemical reactions no longer requires a variety of chemicals that could be harmful to users.
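The lookup behaviour described above (a scanned marker maps to an element, and a combination of markers maps to a reaction product) could be sketched as follows. The app itself is built in Unity; this Python sketch only illustrates the lookup logic, and all marker IDs, tables, and names are invented for illustration.

```python
# Minimal sketch of the marker-to-molecule lookup described in the abstract.
# All IDs, tables, and helper names are illustrative assumptions.
MARKER_TO_ELEMENT = {1: "H", 2: "O", 3: "Na", 4: "Cl"}

# Reactions keyed by the set of elements visible at the same time.
REACTIONS = {
    frozenset({"H", "O"}): "H2O",     # simplified: stoichiometry ignored
    frozenset({"Na", "Cl"}): "NaCl",
}

def resolve_scene(detected_marker_ids):
    """Return the 3D model(s) to display for the currently visible markers."""
    elements = {MARKER_TO_ELEMENT[m] for m in detected_marker_ids
                if m in MARKER_TO_ELEMENT}
    product = REACTIONS.get(frozenset(elements))
    return product if product else sorted(elements)   # fall back to single atoms

print(resolve_scene([1, 2]))   # -> 'H2O'
print(resolve_scene([3]))      # -> ['Na']
```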


2019 ◽  
Vol 1 ◽  
pp. 1-1
Author(s):  
Bernhard Jenny ◽  
Kadek Ananta Satriadi ◽  
Yalong Yang ◽  
Christopher R. Austin ◽  
Simond Lee ◽  
...  

Abstract. Augmented reality (AR) and virtual reality (VR) technologies are increasingly used for the analysis and visualisation of geospatial data. It has become simple to create an immersive three-dimensional AR or VR map with a combination of game engines (e.g., Unity), software development kits for streaming and rendering geospatial data (e.g., Mapbox), and affordable hardware (e.g., HTC Vive). However, it is not clear how best to interact with geospatial visualisations in AR and VR. For example, there are no established standards for efficiently zooming and panning, selecting map features, or placing markers on AR and VR maps. In this paper, we explore interaction with AR and VR maps using gestures and handheld controllers.

As for gesture-controlled interaction, we present the results of recent research projects exploring how body gestures can control basic AR and VR map operations. We use motion-tracking controllers (e.g., Leap Motion) to capture and interpret gestures. We conducted a set of user studies to identify, explore and compare various gestures for controlling map-related operations. This includes, for example, mid-air hand gestures for zooming and panning (Satriadi et al. 2019), selecting points of interest, adjusting the orientation of maps, or placing markers on maps. Additionally, we present novel VR interfaces and interaction methods for controlling the content of maps with gestures.

As for handheld controllers, we discuss interaction with exocentric globes, egocentric globes (where the user stands inside a large virtual globe), flat maps, and curved maps in VR. We demonstrate controller-based interaction for adjusting the centre of world maps displayed on these four types of projection surfaces (Yang et al. 2018), and illustrate the utility of interactively movable VR maps with the example of three-dimensional origin-destination flow maps (Yang et al. 2019).
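As a concrete illustration of gesture-controlled map operations of the kind studied above, the sketch below maps a mid-air pinch gesture to a zoom factor and a palm displacement to a pan offset. The input format and the specific mappings are assumptions for illustration; they are not the gestures evaluated in the cited studies.

```python
# Minimal sketch: mapping hand-tracking measurements to map zoom and pan.
# The input format (fingertip/palm positions in metres) and the exponential
# zoom mapping are illustrative assumptions.
import math

def pinch_to_zoom(thumb_tip, index_tip, ref_distance=0.08, sensitivity=3.0):
    """Convert the thumb-index distance (metres) into a multiplicative zoom."""
    d = math.dist(thumb_tip, index_tip)
    # Spreading the fingers beyond the reference distance zooms in,
    # pinching them together zooms out.
    return math.exp(sensitivity * (d - ref_distance))

def pan_from_palm(prev_palm, curr_palm, metres_per_unit=2.0):
    """Convert palm displacement between frames into a map pan offset."""
    dx = (curr_palm[0] - prev_palm[0]) * metres_per_unit
    dy = (curr_palm[1] - prev_palm[1]) * metres_per_unit
    return dx, dy

print(pinch_to_zoom((0.00, 0.0, 0.4), (0.12, 0.0, 0.4)))  # fingers spread -> zoom in
print(pan_from_palm((0.0, 0.0, 0.4), (0.05, 0.0, 0.4)))   # hand moved right -> pan
```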


2018 ◽  
Author(s):  
Uri Korisky ◽  
Rony Hirschhorn ◽  
Liad Mudrik

Notice: a peer-reviewed version of this preprint has been published in Behavior Research Methods and is freely available at http://link.springer.com/article/10.3758/s13428-018-1162-0

Continuous Flash Suppression (CFS) is a popular method for suppressing visual stimuli from awareness for relatively long periods. Thus far, it has only been used for suppressing two-dimensional images presented on-screen. We present a novel variant of CFS, termed 'real-life CFS', with which the actual immediate surroundings of an observer, including three-dimensional, real-life objects, can be rendered unconscious. Real-life CFS uses augmented reality goggles to present subjects with CFS masks to their dominant eye, leaving their non-dominant eye exposed to the real world. In three experiments we demonstrate that real objects can indeed be suppressed from awareness using real-life CFS, and that suppression durations are comparable to those obtained with the classic, on-screen CFS. We further provide an example of experimental code, which can be modified for future studies using 'real-life CFS'. This opens the gate for new questions in the study of consciousness and its functions.
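CFS masks are typically rapidly refreshed Mondrian-like patterns flashed to one eye. Below is a minimal sketch of generating one such mask frame; the image size, rectangle count, palette, and refresh rate are assumptions, and this is not the experimental code released by the authors.

```python
# Minimal sketch of one Mondrian-style mask frame of the kind typically
# flashed to the dominant eye in CFS. Geometry and palette are assumptions;
# this is not the authors' released experimental code.
import numpy as np

def mondrian_mask(height=600, width=600, n_rects=300, rng=None):
    rng = rng or np.random.default_rng()
    mask = np.full((height, width, 3), 128, dtype=np.uint8)  # grey background
    for _ in range(n_rects):
        # Random rectangle position, size, and colour.
        y, x = rng.integers(0, height), rng.integers(0, width)
        h, w = rng.integers(20, 120, size=2)
        colour = rng.integers(0, 256, size=3, dtype=np.uint8)
        mask[y:y + h, x:x + w] = colour
    return mask

# A CFS stream is a sequence of such frames, commonly refreshed at ~10 Hz.
frames = [mondrian_mask() for _ in range(10)]
```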


Author(s):  
Yahya Rasheed Alameer

The purpose of this research is to determine the effect of the mode of presentation of augmented reality models on the cognitive achievement of secondary students in computer science in the Jazan region. The researcher used a quasi-experimental approach, teaching two-dimensional augmented reality image models to the first experimental group and three-dimensional augmented reality image models to the second experimental group, in order to test the research hypotheses and reveal the relationship between the independent and dependent variables. The sample consisted of (60) students: (30) students in the first experimental group, taught using the two-dimensional augmented reality models, and (30) students in the second experimental group, taught using the three-dimensional augmented reality models. The results showed statistically significant differences at (α ≤ 0.05) between the mean scores of the first experimental group, taught using the two-dimensional augmented reality models, and the second experimental group, taught using the three-dimensional augmented reality models, in the post-application of the cognitive achievement test, in favour of the second experimental group taught using the three-dimensional augmented reality models. In light of these results, recommendations and suggestions were made for developing the cognitive achievement of secondary students in computer science and other subjects.
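The between-groups comparison reported above (two independent groups of 30, tested at α ≤ 0.05) corresponds to an independent-samples t-test on the post-test scores. Below is a minimal sketch of that analysis; the score arrays are synthetic placeholders, not the study's data.

```python
# Minimal sketch of an independent-samples t-test at alpha = 0.05, matching
# the two-group design in the abstract. The scores below are synthetic
# placeholders; replace them with the actual post-test scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_2d = rng.normal(70, 10, size=30)   # placeholder: 2D AR group post-test scores
group_3d = rng.normal(78, 10, size=30)   # placeholder: 3D AR group post-test scores

t_stat, p_value = stats.ttest_ind(group_2d, group_3d)
alpha = 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value <= alpha:
    print("Statistically significant difference between the two groups.")
```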

