HTRDP evaluations on Chinese information processing and intelligent human-machine interface

2007 ◽  
Vol 1 (1) ◽  
pp. 58-93 ◽  
Author(s):  
Qun Liu ◽  
Xiangdong Wang ◽  
Hong Liu ◽  
Le Sun ◽  
Sheng Tang ◽  
...  
2020 ◽  
Vol 22 (3) ◽  
pp. 81
Author(s):  
Tulis Jojok Suryono ◽  
Sudarno Sudarno ◽  
Sigit Santoso

Reactor protection systems (RPS) transform process variable signals from the sensors into initiation and actuation signals that trip the reactor if a signal's value exceeds the predefined trip setpoints of the RPS. Information on the current values of the process variables and the trip setpoints should be displayed properly on the visual display unit (VDU) in order to maintain the situation awareness of the operators in the main control room (MCR). In addition, such information helps them investigate the cause of an accident after a reactor trip and mitigate the accident based on the appropriate emergency operating procedures. This paper investigates how information is processed in the RPS of the Experimental Power Reactor (EPR), which is based on high temperature reactor (HTR) technology, and how that information is displayed on the human-machine interface (HMI) of the MCR of the EPR. This is done by classifying the RPS into three layers based on its components and their functions, followed by an investigation of the type of information and the information processing in each layer. The results show that the form of the information changes throughout the RPS, from the sensors until it is displayed on the VDU. The results of the investigation are useful for understanding the concept of the RPS, especially for new operators, and for preparing mitigation actions based on the process variable that caused the reactor trip.
Keywords: Experimental power reactor; Reactor protection system; Human-machine interface; Information processing; Situation awareness
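The setpoint comparison described above can be illustrated with a minimal sketch. This is not the EPR's actual RPS logic; the function name, the channel, and the 750 °C setpoint are hypothetical, chosen only to show how a trip channel turns a process variable value into an initiation signal.

```python
def trip_signal(value, setpoint, high_trip=True):
    """Return True (generate an initiation signal) if the process
    variable exceeds its trip setpoint (high trip) or falls below
    it (low trip); False means normal operation."""
    return value > setpoint if high_trip else value < setpoint

# Hypothetical hot-gas temperature channel with a 750 °C high-trip setpoint.
print(trip_signal(760.0, 750.0))  # True  -> initiation signal, reactor trips
print(trip_signal(740.0, 750.0))  # False -> normal operation continues
```

In a real RPS this comparison runs per redundant channel, and the actuation signal is typically formed by voting logic (e.g. 2-out-of-4) over the channels rather than by a single comparison.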


1990 ◽  
Author(s):  
B. Bly ◽  
P. J. Price ◽  
S. Park ◽  
S. Tepper ◽  
E. Jackson ◽  
...  

Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 687
Author(s):  
Jinzhen Dou ◽  
Shanguang Chen ◽  
Zhi Tang ◽  
Chang Xu ◽  
Chengqi Xue

With the development and promotion of driverless technology, researchers are focusing on designing various types of external interfaces to induce trust in road users towards this new technology. In this paper, we investigated the effectiveness of a multimodal external human–machine interface (eHMI) for driverless vehicles in a virtual environment, focusing on a two-way road scenario. Three phases of the driverless-vehicle-to-pedestrian interaction process were taken into account: identifying, decelerating, and parking. Twelve eHMIs are proposed, consisting of three visual features (smile, arrow, and none), three audible features (human voice, warning sound, and none), and two physical features (yielding and not yielding). We conducted a study to find a more efficient and safer eHMI for driverless vehicles when they interact with pedestrians. Based on the study outcomes, in the case of yielding, interaction efficiency and pedestrian safety in the multimodal eHMI designs were satisfactory compared to the single-modal ones. The visual modality in the eHMI of driverless vehicles has the greatest impact on pedestrian safety. In addition, the "arrow" was more intuitive to identify than the "smile" in terms of visual modality.


Author(s):  
Saverio Trotta ◽  
Dave Weber ◽  
Reinhard W. Jungmaier ◽  
Ashutosh Baheti ◽  
Jaime Lien ◽  
...  

Procedia CIRP ◽  
2021 ◽  
Vol 100 ◽  
pp. 488-493
Author(s):  
Florian Beuss ◽  
Frederik Schmatz ◽  
Marten Stepputat ◽  
Fabian Nokodian ◽  
Wilko Fluegge ◽  
...  

Nanoscale ◽  
2021 ◽  
Author(s):  
Qiufan Wang ◽  
Jiaheng Liu ◽  
Guofu Tian ◽  
Daohong Zhang

The rapid development of human-machine interface and artificial intelligence is dependent on flexible and wearable soft devices such as sensors and energy storage systems. One of the key factors for...


2021 ◽  
Vol 13 (8) ◽  
pp. 188
Author(s):  
Marianna Di Gregorio ◽  
Marco Romano ◽  
Monica Sebillo ◽  
Giuliana Vitiello ◽  
Angela Vozella

The use of Unmanned Aerial Systems, commonly called drones, is growing enormously today. Applications that can benefit from the use of fleets of drones and a related human–machine interface are emerging to ensure better performance and reliability. In particular, a fleet of drones can become a valuable tool for monitoring a wide area and transmitting relevant information to the ground control station. We present a human–machine interface for a Ground Control Station used by a team of multiple operators to remotely operate a fleet of drones in a collaborative setting. In such a collaborative setting, a major interface design challenge has been to maximize Team Situation Awareness, shifting the focus from the individual operator to the entire group of decision-makers. We were especially interested in testing the hypothesis that shared displays may improve team situation awareness and hence overall performance. The experimental study we present shows no difference in performance between shared and non-shared displays overall. However, in trials where unexpected events occurred, teams using shared displays maintained good performance, whereas the performance of teams using non-shared displays declined. In particular, in unexpected situations, operators were able to safely bring more drones home, maintaining a higher level of team situational awareness.

