Discriminable human gaze patterns for solid objects versus 2-D and 3-D pictures of those objects

2021 ◽  
Vol 21 (9) ◽  
pp. 2689
Author(s):  
Grant Fairchild ◽  
Osman Kavcar ◽  
Michael Rudd ◽  
Rachael Roach ◽  
Michael Gomez ◽  
...  
Author(s):  
Ryuichi Iwata ◽  
Takeo Kajishima ◽  
Shintaro Takeuchi

In the present study, bubble-particle interactions in suspensions are investigated with a coupled immersed-boundary and volume-of-fluid method (IB-VOF method) proposed by the present authors. The validity of the numerical method is examined through simulations of a rising bubble in a liquid and of a falling particle in a liquid. Dilute particle-laden flows and a gas-liquid-solid flow involving solid particles and bubbles of comparable size (D_b/D_p = 1) are then simulated. Drag coefficients of particles in the particle-laden flows are estimated, and flow fields involving multiple particles and a bubble are demonstrated.
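As a minimal sketch of the drag-coefficient estimation the abstract mentions, the example below recovers a sphere's drag coefficient from its terminal settling velocity via the force balance between drag and net gravity. All numerical values are illustrative stand-ins, not values from the paper, and the paper's actual IB-VOF procedure is not reproduced here.

```python
G = 9.81  # gravitational acceleration [m/s^2]

def drag_coefficient(d_p, rho_p, rho_f, u_t):
    """Drag coefficient of a sphere settling at terminal velocity u_t.

    At terminal velocity, drag balances net gravity:
      C_d * (pi/8) * rho_f * d_p^2 * u_t^2 = (pi/6) * d_p^3 * (rho_p - rho_f) * g,
    which rearranges to C_d = 4 g d_p (rho_p - rho_f) / (3 rho_f u_t^2).
    """
    return 4.0 * G * d_p * (rho_p - rho_f) / (3.0 * rho_f * u_t ** 2)

def reynolds_number(d_p, rho_f, mu_f, u_t):
    """Particle Reynolds number based on the slip (terminal) velocity."""
    return rho_f * u_t * d_p / mu_f

# Illustrative 1 mm glass bead settling in water (invented numbers):
d_p, rho_p, rho_f, mu_f = 1e-3, 2500.0, 1000.0, 1e-3
u_t = 0.15  # terminal velocity, e.g. read off a converged simulation [m/s]
print(drag_coefficient(d_p, rho_p, rho_f, u_t))  # -> 0.872
print(reynolds_number(d_p, rho_f, mu_f, u_t))    # -> 150.0
```

In a simulation campaign like the one described, `u_t` would come from the resolved particle trajectory rather than being prescribed.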


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jordan Navarro ◽  
Otto Lappi ◽  
François Osiurak ◽  
Emma Hernout ◽  
Catherine Gabaude ◽  
...  

Abstract
Active visual scanning of the scene is a key task-element in all forms of human locomotion. In the field of driving, models of steering (lateral control) and speed adjustment (longitudinal control) are largely based on drivers’ visual inputs. Despite the knowledge gained on gaze behaviour behind the wheel, our understanding of the sequential aspects of the gaze strategies that actively sample that input remains restricted. Here, we apply scan path analysis to investigate sequences of visual scanning in manual and highly automated simulated driving. Five stereotypical visual sequences were identified under manual driving: forward polling (i.e. far road explorations), guidance, backwards polling (i.e. near road explorations), scenery and speed monitoring scan paths. The previously undocumented backwards polling scan paths were the most frequent. Under highly automated driving, the relative frequency of backwards polling scan paths decreased, the relative frequency of guidance scan paths increased, and scan paths specific to automation supervision appeared. The results shed new light on the gaze patterns engaged while driving. Methodological and empirical questions for future studies are discussed.
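One common building block of scan path analysis is encoding fixations as a sequence of area-of-interest (AOI) labels and counting recurring transitions or sub-sequences, which is how stereotypical patterns like "backwards polling" can surface as frequent sequences. The sketch below illustrates only that generic step; the AOI names and the fixation string are invented, and the paper's specific scan-path method is not reproduced.

```python
from collections import Counter

# One simulated drive encoded as a fixation sequence over hypothetical AOIs:
# F = far road, N = near road, S = scenery, D = dashboard (speedometer)
fixations = list("FNFNFSFNFDFNFFNS")

def transition_counts(seq):
    """Count ordered AOI-to-AOI transitions in a fixation sequence."""
    return Counter(zip(seq, seq[1:]))

def ngram_counts(seq, n):
    """Count length-n sub-sequences (candidate stereotypical scan paths)."""
    return Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))

print(transition_counts(fixations).most_common(3))
print(ngram_counts(fixations, 3).most_common(3))
```

Comparing such frequency tables between manual and automated drives is one simple way to quantify the relative-frequency shifts the abstract reports.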


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 26
Author(s):  
David González-Ortega ◽  
Francisco Javier Díaz-Pernas ◽  
Mario Martínez-Zarzuela ◽  
Míriam Antón-Rodríguez

Drivers’ gaze information can be crucial in driving research because of its relation to driver attention. In particular, the inclusion of gaze data in driving simulators broadens the scope of research studies, as drivers’ gaze patterns can be related to their features and performance. In this paper, we present two gaze region estimation modules integrated in a driving simulator. One uses the 3D Kinect device and the other the virtual reality Oculus Rift device. In every processed frame of the route, the modules detect which of the seven regions into which the driving scene was divided the driver is gazing at. Four methods that learn the relation between gaze displacement and head movement were implemented and compared for gaze estimation. Two are simpler and based on points that try to capture this relation, and two are based on classifiers such as MLP and SVM. Experiments were carried out with 12 users who drove the same scenario twice, each time with a different visualization display: first with a big screen and later with the Oculus Rift. On the whole, the Oculus Rift outperformed the Kinect as hardware for gaze estimation. The Oculus-based gaze region estimation method with the highest performance achieved an accuracy of 97.94%. The information provided by the Oculus Rift module enriches the driving simulator data and makes a multimodal analysis of driving performance possible, in addition to the immersion and realism of the virtual reality experience provided by the Oculus.
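To make the simpler, point-based idea concrete, the sketch below assigns each frame to one of seven gaze regions by nearest calibration point in head-pose space. The region names, calibration angles, and the two-feature (yaw, pitch) representation are all hypothetical illustrations, not the paper's actual setup or feature set.

```python
import math

# Per-region calibration points: mean (yaw, pitch) in degrees recorded while
# the driver looked at each region (hypothetical numbers).
CALIBRATION = {
    "left_mirror": (-35.0, -5.0),
    "left_window": (-20.0, 0.0),
    "road_ahead": (0.0, 0.0),
    "rear_mirror": (10.0, 8.0),
    "dashboard": (0.0, -20.0),
    "right_window": (20.0, 0.0),
    "right_mirror": (35.0, -5.0),
}

def gaze_region(yaw, pitch):
    """Nearest calibration point in (yaw, pitch) space decides the region."""
    return min(CALIBRATION, key=lambda r: math.dist((yaw, pitch), CALIBRATION[r]))

print(gaze_region(-2.0, 1.0))   # -> road_ahead
print(gaze_region(33.0, -4.0))  # -> right_mirror
```

The classifier-based methods the abstract mentions (MLP, SVM) replace this nearest-point rule with a model trained on labeled frames, at the cost of needing training data per user or display.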


2017 ◽  
Vol 124 (2) ◽  
pp. 223-236 ◽  
Author(s):  
Fares Alnajar ◽  
Theo Gevers ◽  
Roberto Valenti ◽  
Sennay Ghebreab

2020 ◽  
Vol 25 (2) ◽  
pp. 338-357
Author(s):  
Cornelius Berthold

Abstract
Koran manuscripts that fit comfortably within the palm of one’s hand are known as early as the 10th century CE. (For the sake of convenience, all dates will be given in the common era (CE) without further mention, and not in the Islamic or Hijra calendar.) Their minute and sometimes barely legible script is clearly not intended for comfortable reading. Instead, recent scholarship suggests that the manuscripts were designed to be worn on the body like pendants or fastened to military flag poles. This is corroborated by some preserved cases for these books, which feature lugs for attaching a cord or chain, and by the manuscripts’ rare occurrence in contemporary textual sources. While pendant Korans in rectangular codex form exist, the majority were produced as codices in the shape of an octagonal prism, and others as scrolls that could be rolled up into a cylindrical form. Both resemble the shapes of similarly dated and pre-Islamic amulets or amulet cases. Building on recent scholarship, I will argue in this article that miniature or pendant Koran manuscripts were produced in similar forms and sizes because of comparable modes of usage, and not necessarily through deliberate imitation of their amuletic ‘predecessors’. The manuscripts’ main functions did not require them to be read or even opened; some of their cases were in fact riveted shut. Accordingly, the haptic feedback they gave to their owners when carried or touched was not that of regular books but of solid objects (like amulets) or even jewellery, which in turn reinforced this practice.

