A Study of the Effect of Varying Visual Occlusion and Task Duration Conditions on Driver Behavior and Performance while Using a Secondary Task Human-Machine Interface

Author(s): David H. Weir, Dean P. Chiang, Aaron M. Brooks
Author(s): Pradnya Sulas Borkar, Prachi U. Chanana, Simranjeet Kaur Atwal, Tanvi G. Londe, Yash D. Dalal

The new era of computing is the Internet of Things (IoT). IoT refers to the ability of networked devices to sense and collect data from around the world and then share that data across the internet, where it can be processed and utilized by different converging systems. Most organisations and industries need up-to-date data and information about their hardware machines. In most industries, the human-machine interface (HMI) is the main means of connecting to hardware devices, and in many manufacturing industries the HMI is the only way to access information about a machine's configuration and performance. It is difficult to collect historical data or run data analysis from an HMI automatically. In addition, one HMI is deployed per machine, which is hard to manage at scale, and frequent use causes wear, so the HMI eventually has to be replaced at considerable cost. The Internet of Things offers a platform on which all the machines in an industry can be managed from a single IoT-based web portal.
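As a rough illustration of the architecture this abstract describes, the sketch below shows one machine publishing its HMI status to a central broker so that a single web portal can collect the history from every machine. It is a minimal sketch, not the authors' implementation; the broker address, topic layout, and status fields are all hypothetical.

```python
# Hypothetical sketch of the IoT-portal idea: each machine publishes its HMI
# status over MQTT, and a single web portal subscribes to all machine topics
# to build the automatic data history that a stand-alone HMI lacks.
import json
import time

import paho.mqtt.client as mqtt  # pip install paho-mqtt

BROKER = "portal.example.com"        # hypothetical portal broker
TOPIC = "factory/machine01/status"   # hypothetical per-machine topic

def read_machine_status():
    """Placeholder for reading configuration/performance data from the HMI."""
    return {"temperature_c": 61.4, "rpm": 1480, "state": "RUNNING",
            "timestamp": time.time()}

client = mqtt.Client()  # paho-mqtt 1.x constructor; 2.x needs a CallbackAPIVersion
client.connect(BROKER, 1883)
client.loop_start()

# Publish a snapshot every 10 seconds; the portal stores each message,
# so operators no longer have to read every HMI on the shop floor.
for _ in range(3):
    client.publish(TOPIC, json.dumps(read_machine_status()))
    time.sleep(10)

client.loop_stop()
client.disconnect()
```

On the portal side, a subscriber to `factory/+/status` would receive every machine's snapshots, which is what makes the single-portal view possible.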


2014, Vol 4 (1), pp. 59-76
Author(s): Ericka Janet Rechy-Ramirez, Huosheng Hu

This paper presents a bio-signal-based human machine interface (HMI) for hands-free control of an electric powered wheelchair. In this novel HMI, an Emotiv EPOC sensor is deployed to detect facial expressions and head movements of users, which are then recognized and converted into four uni-modal control modes and two bi-modal control modes to operate the wheelchair. Nine facial expressions and up-down head movements have been defined and tested, so that users can select some of these facial expressions and head movements to form the six control commands. The proposed HMI is user-friendly and allows users to select one of the available control modes according to their comfort. Experiments are conducted to show the feasibility and performance of the proposed HMI.
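The abstract does not include the authors' mapping, but the sketch below illustrates the general idea of converting recognized bio-signal events into wheelchair commands via a small dispatch table. All labels and commands here are hypothetical stand-ins for the Emotiv EPOC detections and the paper's six control commands.

```python
# Hypothetical sketch: translate recognized facial expressions and head
# movements (as a uni- or bi-modal control mode would deliver them) into
# wheelchair motion commands. Not the authors' actual mapping.
COMMAND_MAP = {
    "smile": "FORWARD",
    "clench": "STOP",
    "look_left": "TURN_LEFT",
    "look_right": "TURN_RIGHT",
    "head_up": "FORWARD",   # bi-modal mode: head movement as alternative trigger
    "head_down": "STOP",
}

def to_wheelchair_command(detected_event):
    """Map one recognized bio-signal event to a control command, if any."""
    return COMMAND_MAP.get(detected_event)

# Example stream of detections from the headset
for event in ["smile", "look_left", "head_down"]:
    print(event, "->", to_wheelchair_command(event))
```

A remappable table like this is also what would let users assign their preferred expressions to commands, the comfort-driven mode selection the abstract emphasizes.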


Author(s): Margreet Vogelzang, Christiane M. Thiel, Stephanie Rosemann, Jochem W. Rieger, Esther Ruigendijk

Purpose Adults with mild-to-moderate age-related hearing loss typically exhibit issues with speech understanding, but their processing of syntactically complex sentences is not well understood. We test the hypothesis that the difficulties listeners with hearing loss have in comprehending and processing syntactically complex sentences arise because the processing of degraded input interferes with the successful processing of complex sentences. Method We performed a neuroimaging study with a sentence comprehension task, varying sentence complexity (through subject–object order and verb–arguments order) and cognitive demands (presence or absence of a secondary task) within subjects. Groups of older subjects with hearing loss (n = 20) and age-matched normal-hearing controls (n = 20) were tested. Results The comprehension data show effects of syntactic complexity and hearing ability, with normal-hearing controls outperforming listeners with hearing loss, seemingly more so on syntactically complex sentences. The secondary task did not influence off-line comprehension. The imaging data show effects of group, sentence complexity, and task, with listeners with hearing loss showing decreased activation in typical speech processing areas, such as the inferior frontal gyrus and superior temporal gyrus. No interactions between group, sentence complexity, and task were found in the neuroimaging data. Conclusions The results suggest that listeners with hearing loss process speech differently from their normal-hearing peers, possibly due to the increased demands of processing degraded auditory input. Increased cognitive demands by means of a secondary visual shape processing task influence neural sentence processing, but no evidence was found that they do so differently for listeners with hearing loss than for normal-hearing listeners.


1990
Author(s): B. Bly, P. J. Price, S. Park, S. Tepper, E. Jackson, ...

Symmetry, 2021, Vol 13 (4), pp. 687
Author(s): Jinzhen Dou, Shanguang Chen, Zhi Tang, Chang Xu, Chengqi Xue

With the development and promotion of driverless technology, researchers are focusing on designing varied types of external interfaces to induce trust in road users towards this new technology. In this paper, we investigated the effectiveness of a multimodal external human–machine interface (eHMI) for driverless vehicles in a virtual environment, focusing on a two-way road scenario. Three phases, identifying, decelerating, and parking, were taken into account in the driverless-vehicle-to-pedestrian interaction process. Twelve eHMIs are proposed, combining three visual features (smile, arrow, and none), three audible features (human voice, warning sound, and none), and two physical features (yielding and not yielding). We conducted a study to find a more efficient and safer eHMI for driverless vehicles when they interact with pedestrians. Based on the study outcomes, in the case of yielding, the interaction efficiency and pedestrian safety of the multimodal eHMI designs were satisfactory compared to the single-modal system. The visual modality in the eHMI of driverless vehicles has the greatest impact on pedestrian safety. In addition, the “arrow” was more intuitive to identify than the “smile” in terms of visual modality.
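For concreteness, the sketch below enumerates the design space the abstract describes: three visual features, three audible features, and two physical behaviours. The full factorial has 18 cells; the study evaluated twelve eHMIs, and since the abstract does not say which cells were excluded, only the full grid is generated here.

```python
# Sketch of the eHMI condition space (full factorial; the paper's twelve
# eHMIs are a subset whose exact composition the abstract does not give).
from itertools import product

VISUAL = ["smile", "arrow", "none"]
AUDIBLE = ["human_voice", "warning_sound", "none"]
PHYSICAL = ["yielding", "not_yielding"]

conditions = [
    {"visual": v, "audible": a, "physical": p}
    for v, a, p in product(VISUAL, AUDIBLE, PHYSICAL)
]

for i, c in enumerate(conditions, 1):
    print(f"cell {i:2d}: visual={c['visual']}, audible={c['audible']}, "
          f"physical={c['physical']}")
```

Enumerating the grid this way also makes the abstract's comparison explicit: cells where the visual or the audible feature is "none" correspond to the single-modal designs against which the multimodal eHMIs were judged.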


Author(s): Saverio Trotta, Dave Weber, Reinhard W. Jungmaier, Ashutosh Baheti, Jaime Lien, ...
