Overthere

Author(s):  
Hyunggoog Seo ◽  
Jaedong Kim ◽  
Kwanggyoon Seo ◽  
Bumki Kim ◽  
Junyong Noh

An absolute mid-air pointing technique requires a preprocessing step called registration, in which the system memorizes the 3D positions and types of objects in advance. Previous studies have simply assumed that this information is already available, because obtaining it requires a cumbersome process performed by an expert in a carefully calibrated environment. We introduce Overthere, which allows the user to intuitively register the objects in a smart environment by pointing at each target object a few times. To ensure that users produce accurate and consistent pointing gestures despite individual differences, we conducted a user study and identified a desirable gesture motion for this purpose. In addition, we provide the user with various forms of feedback that convey the current registration progress and help them adhere to the required conditions, leading to accurate registration results. The user studies show that Overthere is sufficiently intuitive to be used by ordinary people.
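
The abstract does not spell out how a target position is recovered from several pointing gestures, but a natural formulation is the least-squares point closest to the set of pointing rays. A minimal sketch under that assumption (the function and data below are illustrative, not the paper's implementation):

```python
import numpy as np

def nearest_point_to_rays(origins, directions):
    """Least-squares point closest to a set of 3D rays.

    origins:    (N, 3) ray start points (e.g., the user's hand positions)
    directions: (N, 3) pointing directions
    Solves sum_i || (I - d_i d_i^T)(p - o_i) ||^2 -> min over p.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)  # projector onto the plane normal to d
        A += M
        b += M @ o
    return np.linalg.solve(A, b)

# Three pointing gestures toward roughly the same object:
origins = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
dirs = np.array([[1.0, 1.0, 1.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]])
print(nearest_point_to_rays(origins, dirs))  # approx. the common target point
```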

PLoS ONE ◽  
2021 ◽  
Vol 16 (10) ◽  
pp. e0258103
Author(s):  
Andreas Bueckle ◽  
Kilian Buehling ◽  
Patrick C. Shih ◽  
Katy Börner

Working with organs and extracted tissue blocks is an essential task in many medical surgery and anatomy environments. In order to prepare specimens from human donors for further analysis, wet-bench workers must properly dissect human tissue and collect metadata for downstream analysis, including information about the spatial origin of tissue. The Registration User Interface (RUI) was developed to allow stakeholders in the Human Biomolecular Atlas Program (HuBMAP) to register tissue blocks—i.e., to record the size, position, and orientation of human tissue data with regard to reference organs. The RUI has been used by tissue mapping centers across the HuBMAP consortium to register a total of 45 kidney, spleen, and colon tissue blocks, with planned support for 17 organs in the near future. In this paper, we compare three setups for registering one 3D tissue block object to another 3D reference organ (target) object. The first setup is a 2D Desktop implementation featuring a traditional screen, mouse, and keyboard interface. The remaining setups are both virtual reality (VR) versions of the RUI: VR Tabletop, where users sit at a physical desk that is replicated in virtual space, and VR Standup, where users stand upright while performing their tasks. All three setups were implemented using the Unity game engine. We then ran a user study for these three setups in which 42 human subjects completed a sequence of 14 increasingly difficult tasks followed by 30 identical tasks, and we recorded position accuracy, rotation accuracy, completion time, and satisfaction. All study materials were made available in support of future study replication, alongside videos documenting our setups. We found that while VR Tabletop and VR Standup users are about three times as fast and about a third more accurate in terms of rotation than 2D Desktop users (for the sequence of 30 identical tasks), there are no significant differences between the three setups for position accuracy when normalized by the height of the virtual kidney across setups. When extrapolating from the 2D Desktop setup with a 113-mm-tall kidney, the absolute performance values for the 2D Desktop version (22.6 seconds per task, 5.88 degrees rotation accuracy, and 1.32 mm position accuracy after 8.3 tasks in the series of 30 identical tasks) confirm that the 2D Desktop interface is well suited to letting HuBMAP users register tissue blocks at a speed and accuracy that meet the needs of experts performing tissue dissection. In addition, the 2D Desktop setup is cheaper, easier to learn, and more practical for wet-bench environments than the VR setups.
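
As a small illustration of the normalization mentioned above, position error can be divided by the height of the reference organ so that accuracy is comparable across setups with differently sized virtual kidneys. The 113 mm height and 1.32 mm error are quoted from the abstract; the helper itself is hypothetical, not the RUI's actual code:

```python
def normalized_position_error(error_mm: float, organ_height_mm: float) -> float:
    """Express a position error as a fraction of the reference organ's height."""
    return error_mm / organ_height_mm

print(normalized_position_error(1.32, 113.0))  # ~0.0117, i.e. ~1.2% of kidney height
```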


2021 ◽  
Vol 28 (4) ◽  
pp. 1-49
Author(s):  
Sultan A. Alharthi ◽  
George E. Raptis ◽  
Christina Katsini ◽  
Igor Dolgov ◽  
Lennart E. Nacke ◽  
...  

In multiplayer collaborative games, players need to coordinate their actions and synchronize their efforts effectively to succeed as a team; thus, individual differences can impact teamwork and gameplay. This article investigates the effects of cognitive styles on teams engaged in collaborative gaming activities. Fifty-four individuals took part in a mixed-methods user study; they were classified as field-dependent (FD) or field-independent (FI) based on a field-dependence–independence (FD-I) cognitive-style-elicitation instrument. Three groups of teams were formed based on the cognitive style of each team member: FD-FD, FD-FI, and FI-FI. We examined collaborative gameplay in terms of team performance, cognitive load, communication, and player experience. The analysis revealed that FD-I cognitive style affected the performance and mental load of teams. We expect the findings to provide useful insights into how cognitive styles influence collaborative gameplay.


Author(s):  
Dhavalkumar Thakker ◽  
Fan Yang-Turner ◽  
Dimoklis Despotakis

It is becoming increasingly popular to expose government and citywide sensor data as linked data. Linked data appears to offer great potential for exploratory search, supporting smart city goals of helping users learn about and make sense of complex and heterogeneous data. However, there have been no systematic user studies providing insight into how browsing through linked data can support exploratory search. This paper presents a user study that draws on methodological and empirical underpinnings from relevant exploratory search studies. The authors developed a linked data browser that provides an interface for browsing several datasets linked via domain ontologies. Through a systematic study that is qualitative and exploratory in nature, they gained insight into central issues related to exploratory search and browsing through linked data. The study identifies obstacles and challenges related to exploratory search using linked data and derives heuristics for future improvements. The authors also report the main problems users experienced while conducting exploratory search tasks, from which requirements for algorithmic support to address the observed issues are elicited. The approach and lessons learned can facilitate future work on browsing linked data and point to further issues that must be addressed.
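
For concreteness, the core interaction of such a browser is to take the resource the user is currently viewing and list its outgoing typed links as browsable pivots. A minimal sketch using rdflib (the dataset URL and resource URI are placeholders; this illustrates the browsing model, not the authors' system):

```python
from rdflib import Graph, URIRef

# Load one linked-data resource; in practice the browser would dereference
# whichever URI the user navigates to. The URL below is a placeholder.
g = Graph()
g.parse("https://example.org/data/resource.ttl", format="turtle")

start = URIRef("https://example.org/data/resource")

# List outgoing links (predicate -> object): these become the browsable
# "pivots" an exploratory-search user can follow to neighbouring resources.
for predicate, obj in g.predicate_objects(subject=start):
    print(predicate, "->", obj)
```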


Heritage ◽  
2019 ◽  
Vol 2 (3) ◽  
pp. 2573-2596
Author(s):  
George Raptis ◽  
Christos Sintoris ◽  
Nikolaos Avouris

Cultural heritage (CH) institutions attract wide and heterogeneous audiences, which should be efficiently supported and have access to meaningful CH content. This introduces numerous challenges when delivering such experiences, given that people have different cognitive characteristics that influence the way they process information, behave, and acquire knowledge. Our recent studies provide evidence that human cognition should be considered a personalization factor within CH contexts, and thus we developed a framework that delivers cognition-centered personalized CH activities. The efficiency and efficacy of the framework have been successfully assessed through two user studies, but non-technical professionals (e.g., CH designers) may face difficulties when attempting to use it to create personalized CH activities. In this paper, we present DeCACHe, which supports CH designers in creating cognition-centered personalized CH activities throughout different phases of the design lifecycle. We also report a user study with seventeen professional CH designers who used our tool to design CH activities for people with different cognitive characteristics.


Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1814
Author(s):  
Yuzhao Liu ◽  
Yuhan Liu ◽  
Shihui Xu ◽  
Kelvin Cheng ◽  
Soh Masuko ◽  
...  

Despite the convenience offered by e-commerce, online apparel shopping presents various product-related risks, as consumers can neither physically see products nor try them on. Augmented reality (AR) and virtual reality (VR) technologies have been used to improve the online shopping experience. We therefore propose an AR- and VR-based try-on system that provides users with a novel shopping experience in which they can view garments fitted onto a personalized virtual body. Recorded personalized motions are used to allow users to dynamically interact with their dressed virtual body in AR. We conducted two user studies to compare the different roles of VR- and AR-based try-on and to validate the impact of personalized motions on the virtual try-on experience. In the first user study, the mobile application with AR- and VR-based try-on was compared to a traditional e-commerce interface. In the second user study, personalized avatars with pre-defined motion and with personalized motion were compared to a motionless personalized avatar in the AR-based try-on. The results show that AR- and VR-based try-on can positively influence the shopping experience compared with the traditional e-commerce interface. Overall, AR-based try-on provides better and more realistic garment visualization than VR-based try-on. In addition, we found that personalized motions do not directly affect the user's shopping experience.


2011 ◽  
Vol 308-310 ◽  
pp. 1619-1626
Author(s):  
Nan Yin ◽  
Xing Long Zhu ◽  
Xin Zhao ◽  
Shang Gao

When a cylindrical laser shines on the target object, a light spot is produced whose edge is a closed curve, denoted C1. The image of C1 on the CCD image plane is also a closed curve, denoted C2. A coordinate system is established to describe the positional relationships among the camera, the image, and the light source, and to analyze the principle by which monocular vision combined with a laser ring yields information about object depth. The key to making this principle precise is to derive an expression for the curve C2 on the CCD image plane. To compute this expression, C2 is first divided into two parts, an upper curve and a lower curve. Discrete points are sampled on each part, constraints are established, and the curve equations are fitted by least-squares polynomials. Then, to verify the practicality of this method, a virtual model scene is created, from which data describing the edge of the virtual CCD image and the edge of the virtual spot produced when the virtual light source illuminates the virtual object are obtained. Finally, the closed-curve equation is fitted to the data describing the edge of the virtual image; the position of the space object is determined using the light-source equation and the closed-curve equation; and the calculated values are compared with the spot-edge data to verify that obtaining the position of space objects from monocular vision and a laser ring is feasible.
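
The fitting step described above, splitting the closed image curve C2 into an upper and a lower branch and fitting each branch with a least-squares polynomial, can be sketched as follows. The polynomial degree and the split-by-mean-y heuristic are our assumptions:

```python
import numpy as np

def fit_closed_curve(points, degree=4):
    """Fit a closed image curve with two least-squares polynomials.

    points: (N, 2) pixel coordinates sampled along the closed edge C2.
    The curve is split about its mean y value into an upper and a lower
    branch, and each branch is fitted as y = poly(x).
    Returns the two coefficient vectors (highest degree first).
    """
    y_mid = points[:, 1].mean()
    upper = points[points[:, 1] >= y_mid]
    lower = points[points[:, 1] < y_mid]
    coeffs_upper = np.polyfit(upper[:, 0], upper[:, 1], degree)
    coeffs_lower = np.polyfit(lower[:, 0], lower[:, 1], degree)
    return coeffs_upper, coeffs_lower

# Synthetic "spot edge": an ellipse sampled at discrete points.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pts = np.column_stack([50 + 30 * np.cos(t), 40 + 20 * np.sin(t)])
coeffs_up, coeffs_lo = fit_closed_curve(pts)
```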


2015 ◽  
Vol 719-720 ◽  
pp. 1191-1197
Author(s):  
Jun Zhang ◽  
Long Ye ◽  
Qin Zhang ◽  
Jing Ling Wang

This paper focuses on camera calibration and image matching, both key issues in three-dimensional (3D) reconstruction. For camera calibration, we adopt the method proposed by Zhengyou Zhang; in addition, handling tangential distortion is optional in our pipeline. For image matching, we use the SIFT algorithm, which is invariant to image translation, scaling, and rotation, and partially invariant to illumination changes and to affine or 3D projection; it performs well in the subsequent matching of corresponding points. Finally, we perform 3D reconstruction of the surface of the target object. A graphical user interface (GUI) is designed to realize the key functions of binocular stereo vision with better visualization, bringing convenience to follow-up work.
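
A compressed sketch of the two components named above, using OpenCV's implementations of Zhang's calibration and SIFT matching (the file names and checkerboard size are placeholders):

```python
import cv2
import numpy as np

# --- Camera calibration (Zhang's method) from checkerboard images ---
pattern = (9, 6)  # inner corners of the checkerboard (placeholder size)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in ["calib1.png", "calib2.png", "calib3.png"]:  # placeholder files
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
# dist holds radial and tangential distortion coefficients (k1, k2, p1, p2, k3).

# --- SIFT matching between the stereo pair ---
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(cv2.imread("left.png", 0), None)
kp2, des2 = sift.detectAndCompute(cv2.imread("right.png", 0), None)
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
```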


2018 ◽  
Vol 2 (4) ◽  
pp. 76
Author(s):  
Patrick Sunnen ◽  
Béatrice Arend ◽  
Valérie Maquil

In recent years, tangible user interfaces (TUIs) have gained popularity in educational contexts, among others to implement problem-solving and discovery-learning science activities. In the context of an interdisciplinary and cross-institutional collaboration, we conducted a multimodal EMCA-based video user study involving a TUI-mediated bicycle-mechanics simulation. This article focuses on the discovering work of a group of three students with regard to a particular tangible object (a red button) designed to support participants' engagement with the underlying physics aspects, and its consequences for their engagement with the targeted mechanics aspects.


Author(s):  
Ruidong Zhang ◽  
Mingyang Chen ◽  
Benjamin Steeper ◽  
Yaxuan Li ◽  
Zihan Yan ◽  
...  

This paper presents SpeeChin, a smart necklace that can recognize 54 English and 44 Chinese silent speech commands. A customized infrared (IR) imaging system is mounted on the necklace to capture images of the neck and face from under the chin. These images are first pre-processed and then fed to an end-to-end deep convolutional recurrent neural network (CRNN) model to infer the silent speech commands. A user study with 20 participants (10 per language) showed that SpeeChin could recognize the 54 English and 44 Chinese silent speech commands with average cross-session accuracies of 90.5% and 91.6%, respectively. To further investigate the potential of SpeeChin in recognizing other silent speech commands, we conducted another study in which 10 participants distinguished between 72 one-syllable nonwords. Based on the results of the user studies, we discuss the challenges and opportunities of deploying SpeeChin in real-world applications.
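
The abstract does not give the exact architecture, but an end-to-end convolutional-recurrent classifier of the kind described can be sketched in PyTorch as follows; the frame size, layer widths, and sequence length are our assumptions:

```python
import torch
import torch.nn as nn

class CRNNClassifier(nn.Module):
    """Conv frontend per IR frame, GRU over the frame sequence, then a
    linear head over silent-speech command classes. Sizes are illustrative."""
    def __init__(self, n_classes=54):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.gru = nn.GRU(input_size=32 * 4 * 4, hidden_size=128, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, frames):                      # frames: (batch, time, 1, H, W)
        b, t = frames.shape[:2]
        feats = self.conv(frames.flatten(0, 1))     # (b*t, 32, 4, 4)
        feats = feats.flatten(1).view(b, t, -1)     # (b, t, 512)
        _, h = self.gru(feats)                      # h: (1, b, 128)
        return self.head(h[-1])                     # (b, n_classes) logits

logits = CRNNClassifier()(torch.randn(2, 30, 1, 64, 64))  # 30 frames per command
```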


Behaviour ◽  
2012 ◽  
Vol 149 (1) ◽  
pp. 111-132
Author(s):  
Péter Pongrácz ◽  
Petra Bánhegyi ◽  
Ádám Miklósi

Dogs can learn effectively from a human demonstrator in detour tests as well as in various kinds of manipulative tasks. In this experiment we used a novel two-action device from which the target object (a ball) was obtained by tilting a tube, either by pulling a rope attached to the end of the tube or by directly pushing the end of the tube. Tilting the tube was relatively easy for naïve companion dogs; the human demonstration was therefore intended either to alter or to strengthen the dogs' initial preference for tube pushing (established from the behaviour of naïve dogs in the absence of a human demonstrator). Our results showed that subjects preferred the demonstrated action in the two-action test. After witnessing the tube-pushing demonstration, dogs performed significantly more tube pushing than dogs in the rope-pulling demonstration group. Conversely, dogs that observed the rope-pulling demonstration performed that action significantly more often than subjects in the other demonstration group. The ratio of rope pulling was also significantly higher in the rope-pulling demonstration group than in the No Demo (control) group. The overall success of solving the task was also influenced by the dog's social rank among its conspecific companions at home: independently of the type of demonstration, dominant dogs solved the task significantly more often than subordinate dogs did, whereas there was no such difference in the No Demo group. This experiment showed that a simple two-action device that does not require extensive pre-training can be suitable for testing social learning in dogs. However, effects of social rank should be taken into account when social learning in dogs is studied and tested, because dominant and subordinate dogs perform differently after observing a demonstrator.

