Advancing In-vehicle Gesture Interactions with Adaptive Hand-Recognition and Auditory Displays

2021
Author(s):
Moustafa Tabbarah
Yusheng Cao
Yi Liu
Myounghoon Jeon
Author(s):
Pramod Kumar S
Narendra T.V
Vinay N.A
...

1985
Author(s):
B. E. Mulligan
D. K. McBride
L. S. Goodman

Author(s):  
W. Peter Colquhoun

Using a task which closely simulated the actual output from a sonar device, the performance of 12 subjects was observed for a total of 115 hr in repeated prolonged monitoring sessions under auditory, visual, and dual-mode display conditions. Despite an underlying superiority of signal discriminability on the visual display, and the occurrence of long-term practice effects, the detection rate was consistently and substantially higher under the auditory condition, and higher still with the dual-mode display. These results are similar to those obtained by earlier workers using artificial laboratory tasks over shorter periods, and are consistent with the notion that auditory displays have greater attention-gaining capacity in a “vigilance” situation. A further comparison of the auditory and visual displays was then made in an “alerted” situation, in which the possible occurrence of a signal was indicated by a warning stimulus in the alternative sensory mode. Ten subjects were observed for a total of 57 hr under these conditions, in which performance was found to be clearly superior with the visual display. Cross-modal correlations of performance indicated the presence of a common factor of signal detectability within subjects. It was concluded that where efficiency in both the initial detection of targets and their subsequent identification and tracking is important, the best solution would seem to be to retain both auditory and visual displays and to ensure that they are monitored concurrently.
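For readers unfamiliar with how detectability is typically quantified in vigilance studies of this kind, the sketch below computes the standard signal-detection measure d′ from hit and false-alarm counts. It is illustrative only: the counts, condition labels, and rate correction are assumptions for this sketch, not data or analysis from the study.

```python
# Illustrative only: signal-detection-theory measure of detectability (d'),
# commonly used to compare display conditions in vigilance research.
# The hit/false-alarm counts below are hypothetical, not data from the study.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Compute d' from raw counts, with a small correction for 0/1 rates."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    # Log-linear correction avoids infinite z-scores for perfect rates.
    hit_rate = (hits + 0.5) / (n_signal + 1)
    fa_rate = (false_alarms + 0.5) / (n_noise + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts per display condition over a monitoring session.
conditions = {
    "auditory":  (42, 18, 6, 114),
    "visual":    (30, 30, 4, 116),
    "dual-mode": (48, 12, 5, 115),
}
for name, counts in conditions.items():
    print(f"{name:>9}: d' = {d_prime(*counts):.2f}")
```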


Author(s):
Pontus Larsson
Justyna Maculewicz
Johan Fagerlönn
Max Lachmann

This position paper discusses vital challenges related to user experience design in unsupervised, highly automated cars. These challenges are: (1) how to avoid motion sickness, (2) how to ensure users’ trust in the automation, (3) how to ensure usability and support the formation of accurate mental models of the automation system, and (4) how to provide a pleasant and enjoyable experience. We argue that auditory displays have the potential to help solve these issues. While auditory displays in modern vehicles typically make use of discrete and salient cues, we argue that less intrusive, continuous sonic interaction could be more beneficial for the user experience.
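To make the contrast between discrete cues and continuous sonic interaction concrete, here is a minimal sketch (not from the paper) that maps a hypothetical time-to-takeover value either to a one-shot alert or to smoothly varying amplitude and pitch. The thresholds, mapping, and parameter names are assumptions chosen for illustration.

```python
# Illustrative contrast between a discrete alert and continuous sonic
# interaction. The mapping, thresholds, and parameter names are assumptions
# made for this sketch, not designs from the position paper.
import numpy as np

def discrete_alert(time_to_takeover_s, threshold_s=8.0):
    """Discrete, salient cue: a beep is either triggered or not."""
    return "BEEP" if time_to_takeover_s < threshold_s else None

def continuous_cue(time_to_takeover_s, horizon_s=30.0):
    """Continuous cue: urgency is mapped smoothly to amplitude and pitch."""
    urgency = np.clip(1.0 - time_to_takeover_s / horizon_s, 0.0, 1.0)
    amplitude = 0.1 + 0.6 * urgency      # quiet ambience -> prominent tone
    pitch_hz = 220.0 * (1.0 + urgency)   # 220 Hz drifting up toward 440 Hz
    return amplitude, pitch_hz

for t in (25.0, 12.0, 5.0):
    amp, hz = continuous_cue(t)
    print(f"t={t:4.1f}s  discrete={discrete_alert(t)!s:5}  "
          f"continuous: amp={amp:.2f}, pitch={hz:.0f} Hz")
```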


2013
Vol 8 (2)
pp. 73
Author(s):
Alexander Refsum Jensenius
Rolf Inge Godøy

The paper presents sonomotiongram, a technique for the creation of auditory displays of human body motion based on motiongrams. A motiongram is a visual display of motion, based on frame differencing and reduction of a regular video recording. The resultant motiongram shows the spatial shape of the motion as it unfolds in time, somewhat similar to the way in which spectrograms visualise the shape of (musical) sound. The visual similarity of motiongrams and spectrograms is the conceptual starting point for the sonomotiongram technique, which explores how motiongrams can be turned into sound using “inverse FFT”. The paper presents the idea of shape-sonification, gives an overview of the sonomotiongram technique, and discusses sonification examples of both simple and complex human motion.
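A minimal sketch of the pipeline as the abstract describes it, assuming a grayscale-converted video read with OpenCV: frame differencing and row-wise reduction produce the motiongram, and each motiongram column is then treated as a magnitude spectrum and passed through an inverse FFT with overlap-add. The file name, FFT size, hop length, and reduction axis are assumptions for illustration, not values from the paper.

```python
# Sketch of the sonomotiongram idea as described in the abstract: build a
# motiongram by frame differencing and row-wise reduction of a video, then
# turn each motiongram column into audio with an inverse FFT, treating it as
# a magnitude spectrum. FFT size, hop length, and the reduction axis are
# assumptions for illustration.
import numpy as np
import cv2

def motiongram(video_path):
    """Stack per-frame motion profiles (rows: image height, cols: time)."""
    cap = cv2.VideoCapture(video_path)
    prev, columns = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            diff = np.abs(gray - prev)            # frame differencing
            columns.append(diff.mean(axis=1))     # reduce each row to one value
        prev = gray
    cap.release()
    return np.array(columns).T                    # shape: (height, n_frames)

def sonify(mgram, n_fft=1024, hop=256):
    """Inverse-FFT sonification: each column acts as a magnitude spectrum."""
    height, n_frames = mgram.shape
    audio = np.zeros(hop * n_frames + n_fft)
    for i in range(n_frames):
        # Interpolate the motion profile onto n_fft//2 + 1 frequency bins.
        spectrum = np.interp(np.linspace(0, height - 1, n_fft // 2 + 1),
                             np.arange(height), mgram[:, i])
        frame = np.fft.irfft(spectrum, n=n_fft) * np.hanning(n_fft)
        audio[i * hop:i * hop + n_fft] += frame   # overlap-add
    peak = np.max(np.abs(audio)) or 1.0
    return audio / peak                           # normalised audio samples

# Usage with a hypothetical recording:
# audio = sonify(motiongram("dance.mp4"))
```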

