When does explicit expectancy affect task processing?

2009 ◽  
Author(s):  
Peter A. Frensch ◽  
Sabine Schwager ◽  
Robert Gaschler ◽  
Dennis Rünger
2008 ◽  
Vol 29 (4) ◽  
pp. 205-216 ◽  
Author(s):  
Stefan Krumm ◽  
Lothar Schmidt-Atzert ◽  
Kurt Michalczyk ◽  
Vanessa Danthiir

Mental speed (MS) and sustained attention (SA) are theoretically distinct constructs. However, tests of MS are very similar to SA tests that use time pressure as an impeding condition. Performance in such tasks relies largely on the participants' speed of task processing (i.e., how quickly and correctly one can perform simple cognitive tasks). The present study examined whether SA and MS are empirically the same or different constructs. To this end, 24 paper-pencil and computerized tests were administered to 199 students. SA turned out to be highly related to the MS task classes substitution and perceptual speed. Furthermore, SA showed a very close relationship with the paper-pencil MS factor. The correlation between SA and computerized speed was considerably lower but still high. In a higher-order general speed factor model, SA had the highest loading on the higher-order factor, which explained 88% of the SA variance. It is argued that SA (as operationalized with tests using time pressure as an impeding condition) and MS cannot be differentiated at the level of broad constructs. Implications for neuropsychological assessment and future research are discussed.
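As a rough illustration of the reported 88% figure: in a standardized higher-order model, the proportion of a first-order factor's variance explained by the higher-order factor equals its squared standardized loading. The sketch below is illustrative arithmetic only, not the authors' analysis; the implied loading of about .94 is an inference, not a value reported in the abstract.

```python
# Illustrative arithmetic only (not the authors' analysis): in a standardized
# higher-order model, the share of a first-order factor's variance explained by
# the higher-order factor equals its squared standardized loading.
explained_variance = 0.88                    # figure reported in the abstract for SA
implied_loading = explained_variance ** 0.5  # inferred, not reported: ~0.94
print(f"Implied standardized loading of SA: {implied_loading:.2f}")

def variance_explained(standardized_loading: float) -> float:
    """Squared standardized loading = proportion of variance explained."""
    return standardized_loading ** 2

print(f"Check: a loading of 0.94 explains {variance_explained(0.94):.2f} of the variance")
```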


Author(s):  
Umar Ibrahim Minhas ◽  
Roger Woods ◽  
Georgios Karakonstantis

Abstract: Whilst FPGAs have been used in cloud ecosystems, it is still extremely challenging to achieve high compute density when mapping heterogeneous multi-tasks on shared resources at runtime. This work addresses this by treating the FPGA resource as a service and employing multi-task processing at the high level, design space exploration, and static off-line partitioning in order to allow more efficient mapping of heterogeneous tasks onto the FPGA. In addition, a new, comprehensive runtime functional simulator is used to evaluate the effect of various spatial and temporal constraints on both the existing and new approaches when varying system design parameters. A comprehensive suite of real high-performance computing tasks was implemented on a Nallatech 385 FPGA card; the results show that our approach can provide on average 2.9× and 2.3× higher system throughput for compute- and mixed-intensity tasks, while being 0.2× lower for memory-intensive tasks due to external memory access latency and bandwidth limitations. The work has been extended by introducing a novel scheduling scheme to enhance temporal utilization of resources when using the proposed approach. Additional results for large queues of mixed-intensity tasks (compute and memory) show that the proposed partitioning and scheduling approach can provide more than 3× system speedup over previous schemes.
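To make the partitioning-plus-scheduling idea concrete, the minimal sketch below greedily maps a queue of heterogeneous tasks onto a fixed number of statically partitioned regions and reports the resulting makespan. It is not the authors' simulator; task names, durations, and partition counts are hypothetical.

```python
# Minimal sketch (not the authors' simulator): greedy runtime mapping of
# heterogeneous tasks onto statically partitioned FPGA regions.
import heapq
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    duration: float  # estimated execution time on one partition (arbitrary units)

def schedule(tasks, n_partitions):
    """Assign each task to the partition that becomes free earliest; return the makespan."""
    heap = [(0.0, i) for i in range(n_partitions)]  # (time partition is next free, partition id)
    heapq.heapify(heap)
    for task in tasks:
        start, part = heapq.heappop(heap)           # earliest-available partition
        heapq.heappush(heap, (start + task.duration, part))
    return max(t for t, _ in heap)                  # makespan of the whole queue

queue = [Task("fft", 3.0), Task("matmul", 5.0), Task("stencil", 2.0), Task("sort", 4.0)]
print("Makespan with 2 partitions:", schedule(queue, 2))
print("Makespan with 4 partitions:", schedule(queue, 4))
```

More partitions raise temporal utilization only while there are enough queued tasks to keep them busy, which is the trade-off the paper's design space exploration targets.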


Author(s):  
Lasse Pelzer ◽  
Christoph Naefgen ◽  
Robert Gaschler ◽  
Hilde Haider

Abstract: Dual-task costs might result from confusions at the task-set level when the two tasks are not represented as distinct task-sets but are instead integrated into a single task-set. This suggests that events in the two tasks are stored and retrieved together as an integrated memory episode. In a series of three experiments, we tested for such integrated task processing and whether it can be modulated by regularities between the stimuli of the two tasks (across-task contingencies) or by sequential regularities within one of the tasks (within-task contingencies). Building on the experimental approach of feature binding in action control, we tested whether participants in a dual-tasking experiment show partial-repetition costs: they should be slower when only the stimulus in one of the two tasks is repeated from Trial n − 1 to Trial n than when the stimuli in both tasks repeat. In all three experiments, participants processed a visual-manual and an auditory-vocal tone-discrimination task, which were always presented concurrently. In Experiment 1, we show that retrieval of Trial n − 1 episodes is stable across practice if the stimulus material is drawn randomly. Across-task contingencies (Experiment 2) and sequential regularities within a task (Experiment 3) can compete with n − 1-based retrieval, leading to a reduction of partial-repetition costs with practice. Overall, the results suggest that participants do not separate the processing of the two tasks; however, within-task contingencies might reduce integrated task processing.
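The partial-repetition-cost measure itself is simple to compute from trial-level data: compare response times on trials where only one task's stimulus repeats against trials where both repeat. The snippet below is an illustrative sketch, not the authors' analysis code; the column names and toy data are hypothetical.

```python
# Illustrative sketch (not the authors' analysis code): computing
# partial-repetition costs from trial-level dual-task data.
import statistics

# Each trial records whether the Task-1 and Task-2 stimuli repeated from
# trial n-1, plus the response time in ms (toy values).
trials = [
    {"t1_repeat": True,  "t2_repeat": True,  "rt": 640},
    {"t1_repeat": True,  "t2_repeat": False, "rt": 710},
    {"t1_repeat": False, "t2_repeat": True,  "rt": 705},
    {"t1_repeat": True,  "t2_repeat": True,  "rt": 655},
    {"t1_repeat": False, "t2_repeat": False, "rt": 690},
]

full_repetition = [t["rt"] for t in trials if t["t1_repeat"] and t["t2_repeat"]]
partial_repetition = [t["rt"] for t in trials if t["t1_repeat"] != t["t2_repeat"]]

# Slower RTs when only one stimulus repeats than when both repeat are taken
# as evidence that the two tasks were bound into one memory episode.
cost = statistics.mean(partial_repetition) - statistics.mean(full_repetition)
print(f"Partial-repetition cost: {cost:.1f} ms")
```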


1997 ◽  
Vol 63 (616) ◽  
pp. 4417-4423
Author(s):  
Yoshiteru NAKA ◽  
Masahiko ONOSATO ◽  
Shinichiro SUMIOKA ◽  
Kazuaki IWATA

2021 ◽  
Vol 12 (5) ◽  
pp. 233-254
Author(s):  
D. Yu. Bulgakov

A method is proposed for solving resource-intensive, CPU-bound tasks when the computing resources of a single server become insufficient. The need to solve this class of problems arises when using various machine learning models in a production environment, as well as in scientific research. Cloud computing makes it possible to organize distributed task processing on virtual servers that are easy to create, maintain, and replicate. An approach based on free software implemented in the Python programming language is justified and proposed. The resulting solution is analyzed from the standpoint of queueing theory. The effect of the proposed approach is demonstrated on problems of face recognition and analysis of biomedical signals.
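The abstract names Python but not the specific components, so the following is only a minimal sketch of the general pattern: fan CPU-bound jobs out to a pool of workers and collect the results. On multiple virtual servers the same producer/worker pattern is typically backed by a shared task queue, and in queueing-theory terms the goal is to keep worker utilization below saturation.

```python
# Minimal sketch, not the paper's implementation (the abstract does not name the
# specific libraries): distributing CPU-bound jobs across worker processes with
# the standard library.
from concurrent.futures import ProcessPoolExecutor
import math

def cpu_bound_task(n: int) -> int:
    """Stand-in for a heavy job such as face recognition or biomedical signal analysis."""
    return sum(math.isqrt(i) for i in range(n))

def process_batch(jobs, workers=4):
    """Fan the jobs out to a pool of worker processes and collect results in order."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(cpu_bound_task, jobs))

if __name__ == "__main__":
    print(process_batch([10_000, 20_000, 30_000]))
```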


2021 ◽  
Vol 2021 ◽  
pp. 1-18
Author(s):  
Wenjuan Li ◽  
Shihua Cao ◽  
Keyong Hu ◽  
Jian Cao ◽  
Rajkumar Buyya

The cloud-fog-edge hybrid system is the evolution of the traditional centralized cloud computing model. By combining resources at different levels, it can handle service requests from terminal users with lower latency. However, it is accompanied by greater uncertainty, unreliability, and instability due to the decentralization and regionalization of service processing, as well as unreasonable and unfair resource allocation, task scheduling, and coordination caused by the autonomy of node distribution. Therefore, this paper introduces blockchain technology to construct a trust-enabled interaction framework in a cloud-fog-edge environment; through a double-chain structure, it improves the reliability and verifiability of task processing without large management overhead. Furthermore, in order to fully consider reasonableness and load balance in service coordination and task scheduling, Berger's model and the concept of service justice are introduced to perform reasonable matching of tasks and resources. We have developed a trust-based cloud-fog-edge service simulation system based on iFogSim, and through a large number of experiments, the performance of the proposed model is verified against several classical scheduling models in terms of makespan, scheduling success rate, latency, and user satisfaction.
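The abstract does not spell out how Berger's model is operationalized; one common reading in scheduling work is a fairness ratio of the service a task actually receives to the service its demand would fairly entitle it to, which the scheduler tries to keep balanced across tasks. The snippet below is a hypothetical simplification of that idea, not the paper's algorithm; all task demands and allocations are made up.

```python
# Hedged simplification (not the paper's algorithm): a Berger-style "service
# justice" ratio per task, i.e. allocated service relative to demanded service.
def justice_ratios(plan):
    """plan maps task name -> (demanded capacity, allocated capacity)."""
    return {task: allocated / demanded for task, (demanded, allocated) in plan.items()}

# Hypothetical allocation plan in MIPS.
plan = {"t1": (400, 400), "t2": (800, 600), "t3": (200, 200)}
ratios = justice_ratios(plan)
print(ratios)                                 # {'t1': 1.0, 't2': 0.75, 't3': 1.0}
print("min justice:", min(ratios.values()))   # a fairness-aware scheduler tries to raise this
```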

