Ensuring the Consistency between User Requirements and Task Models: A Behavior-Based Automated Approach

2020 ◽  
Vol 4 (EICS) ◽  
pp. 1-32
Author(s):  
Thiago Rocha Silva ◽  
Marco Winckler ◽  
Hallvard Trætteberg
2010 ◽  
Vol 34-35 ◽  
pp. 482-486
Author(s):  
De Hai Chen ◽  
Yu Ming Liang

This paper describes an indoor mobile robot covering a path while avoiding obstacles based on a behavior-based fuzzy controller. The robot measures the distance to obstacles with ultrasonic sensors and infrared range sensors, and this distance is the input parameter of the behavior-based fuzzy controller. The behavior architecture has three levels of behavior: emergency behavior, obstacle avoidance behavior, and task-oriented behavior. The task-oriented behavior is the highest-level behavior and has two subtasks: wall following and path covering. The middle-level behavior is obstacle avoidance. The lowest level is the emergency behavior, which has the highest priority. The simulation results demonstrate that each behavior works correctly.
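The priority ordering described in the abstract (emergency overrides obstacle avoidance, which overrides the task-oriented behavior) can be sketched as a simple arbiter. This is a minimal illustration, not the authors' fuzzy controller: the distance thresholds and behavior names are hypothetical, and crisp rules stand in for the fuzzy inference over the sensor input.

```python
# Sketch of three-level behavior arbitration: the active behavior is the
# highest-priority one whose trigger condition holds. Thresholds are
# hypothetical placeholders, not values from the paper.

EMERGENCY_DIST = 0.15   # metres; below this, stop immediately (hypothetical)
AVOID_DIST = 0.50       # metres; below this, steer around the obstacle (hypothetical)

def select_behavior(distance_to_obstacle: float) -> str:
    """Return the active behavior, checking highest priority first."""
    if distance_to_obstacle < EMERGENCY_DIST:
        return "emergency"            # highest priority: halt / back off
    if distance_to_obstacle < AVOID_DIST:
        return "obstacle_avoidance"   # middle level: avoid the obstacle
    return "task_oriented"            # lowest priority: wall following / path covering

print(select_behavior(0.10))  # emergency
print(select_behavior(0.30))  # obstacle_avoidance
print(select_behavior(1.20))  # task_oriented
```

In the paper the switching would be mediated by fuzzy membership functions over the measured distance rather than hard thresholds, but the priority structure is the same.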


1999 ◽  
Vol 8 (3) ◽  
pp. 355-365 ◽  
Author(s):  
Anne Parent

This paper describes the creation of a hypothetical virtual art exhibit using a virtual environment task analysis tool. The Virtual Environment Task Analysis Tool (VETAT-ART) is a paper-and-pencil tool developed to provide structure and guidance to the needs-analysis process that is essential to the development of lifelike virtual exhibits. To illustrate its potential usefulness, VETAT-ART is applied to the design of a historical art exhibit. The first part of the article draws a general profile of our sample application. It introduces organizational-, user-, and task-related factors typically collected when designing or modifying most computer-based systems. The second part of the paper presents the user and task requirements unique to the creation of a virtual environment. Task requirements determine the contents of various storyboards and draw the architecture of the environment. Storyboards describe the images, sounds, sensations, and scents to be found in individual galleries. The architecture establishes a sensible order in which the galleries may be accessed. User requirements determine the human sensory, cognitive, and ergonomic needs relevant to the key activities museum visitors are expected to perform. Activities include visualization and inspection, exploration, and the manipulation of virtual artifacts. Eight goal categories define user requirements. Visual, auditory, and haptic requirements are determined by human sensory issues. Features relevant to memory capacity, information load, and mental models describe cognitive issues. Physical and physiological considerations are determined by human ergonomics. The third section of the paper suggests usability goals and possible measures of success. In conclusion, limitations and potential extensions of the tool are discussed.


Author(s):  
Joanna Kołodziej ◽  
Fatos Xhafa

Modern approaches to modeling user requirements on resource and task allocation in hierarchical computational grids

Task scheduling and resource allocation are among the crucial issues in any large-scale distributed system, including Computational Grids (CGs). These issues are commonly investigated using traditional computational models and resolution methods that yield near-optimal scheduling strategies. One drawback of such approaches is that they cannot effectively tackle the complex nature of CGs. On the one hand, such systems encompass many administrative domains with their own access policies, user privileges, etc. On the other hand, CGs have a hierarchical nature, and therefore any computational model should be able to effectively express this hierarchical architecture in the optimization model. Recently, researchers have been investigating the use of game theory for modeling user requirements regarding task and resource allocation in grid scheduling problems. In this paper we present two general non-cooperative game approaches, namely the symmetric non-zero-sum game and the asymmetric Stackelberg game, for modeling grid user behavior defined as user requirements. In our game-theoretic approaches we are able to cast new requirements arising in allocation problems, such as asymmetric user relations and security and reliability restrictions in CGs. For solving the games, we designed and implemented GA-based hybrid schedulers for approximating the equilibrium points of both games. The proposed hybrid resolution methods are experimentally evaluated in a grid simulator under heterogeneous, large-scale, and dynamic conditions. The relative performance of the schedulers is measured in terms of the makespan and flowtime metrics. The experimental analysis showed the high efficiency of the meta-heuristics in solving the game-based models, especially in the case of an additional cost of secure task scheduling to be paid by the users.
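The makespan and flowtime metrics used above to compare the schedulers can be computed from a schedule in a few lines. This is a sketch under assumed conventions: the schedule is represented as a mapping from machine to an ordered list of task processing times, which is an illustrative format, not the simulator's actual data model.

```python
# Makespan: completion time of the last machine to finish.
# Flowtime: sum of the completion times of all individual tasks,
# assuming each machine runs its tasks sequentially in list order.

from typing import Dict, List

def makespan(schedule: Dict[str, List[float]]) -> float:
    """Largest total load over all machines."""
    return max(sum(times) for times in schedule.values())

def flowtime(schedule: Dict[str, List[float]]) -> float:
    """Sum of each task's finishing time on its machine."""
    total = 0.0
    for times in schedule.values():
        finish = 0.0
        for t in times:
            finish += t      # this task finishes when its predecessors plus itself are done
            total += finish
    return total

s = {"m1": [2.0, 3.0], "m2": [4.0]}
print(makespan(s))  # 5.0  (m1 finishes at 2+3)
print(flowtime(s))  # 11.0 (task finish times 2, 5, and 4)
```

A scheduler minimizing makespan balances load across machines, whereas minimizing flowtime favors running short tasks first; the paper's GA-based hybrids are evaluated on both.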


Author(s):  
Margreet Vogelzang ◽  
Christiane M. Thiel ◽  
Stephanie Rosemann ◽  
Jochem W. Rieger ◽  
Esther Ruigendijk

Purpose Adults with mild-to-moderate age-related hearing loss typically exhibit issues with speech understanding, but their processing of syntactically complex sentences is not well understood. We test the hypothesis that the difficulties listeners with hearing loss have in comprehending and processing syntactically complex sentences arise because the processing of degraded input interferes with the successful processing of complex sentences. Method We performed a neuroimaging study with a sentence comprehension task, varying sentence complexity (through subject–object order and verb–arguments order) and cognitive demands (presence or absence of a secondary task) within subjects. Groups of older subjects with hearing loss (n = 20) and age-matched normal-hearing controls (n = 20) were tested. Results The comprehension data show effects of syntactic complexity and hearing ability, with normal-hearing controls outperforming listeners with hearing loss, seemingly more so on syntactically complex sentences. The secondary task did not influence off-line comprehension. The imaging data show effects of group, sentence complexity, and task, with listeners with hearing loss showing decreased activation in typical speech processing areas, such as the inferior frontal gyrus and superior temporal gyrus. No interactions between group, sentence complexity, and task were found in the neuroimaging data. Conclusions The results suggest that listeners with hearing loss process speech differently from their normal-hearing peers, possibly due to the increased demands of processing degraded auditory input. Increased cognitive demands by means of a secondary visual shape processing task influence neural sentence processing, but no evidence was found that they do so differently for listeners with hearing loss and normal-hearing listeners.


2019 ◽  
Vol 62 (12) ◽  
pp. 4417-4432 ◽  
Author(s):  
Carola de Beer ◽  
Jan P. de Ruiter ◽  
Martina Hielscher-Fastabend ◽  
Katharina Hogrefe

Purpose People with aphasia (PWA) spontaneously use different kinds of gesture when they communicate. Although there is evidence that the nature of the communicative task influences the linguistic performance of PWA, so far little is known about the influence of the communicative task on the production of gestures by PWA. We aimed to investigate the influence of varying communicative constraints on the production of gesture and spoken expression by PWA in comparison to persons without language impairment. Method Twenty-six PWA with varying aphasia severities and 26 control participants (CP) without language impairment participated in the study. Spoken expression and gesture production were investigated in 2 different tasks: (a) spontaneous conversation about topics of daily living and (b) a cartoon narration task, that is, retellings of short cartoon clips. The frequencies of words and gestures, as well as of the different gesture types produced by the participants, were analyzed and tested for potential effects of group and task. Results The main task effects revealed that PWA and CP used more iconic gestures and pantomimes in the cartoon narration task than in spontaneous conversation. Metaphoric gestures, deictic gestures, number gestures, and emblems were more frequently used in spontaneous conversation than in cartoon narrations by both participant groups. Group effects show that, in both tasks, PWA's gesture-to-word ratios were higher than those of the CP. Furthermore, PWA produced more interactive gestures than the CP in both tasks, as well as more number gestures and pantomimes in spontaneous conversation. Conclusions The current results suggest that PWA use gestures to compensate for their verbal limitations under varying communicative constraints. The properties of the communicative task influence the use of different gesture types in people with and without aphasia. Thus, the influence of communicative constraints needs to be considered when assessing PWA's multimodal communicative abilities.

