parallel tasks
Recently Published Documents

TOTAL DOCUMENTS: 238 (five years: 43)
H-INDEX: 22 (five years: 3)

Author(s): A. David Redish, Adam Kepecs, Lisa M. Anderson, Olivia L. Calvin, Nicola M. Grissom, et al.

We propose a new conceptual framework (computational validity) for translation across species and populations based on the computational similarity between the information processing underlying parallel tasks. Translating between species depends not on the superficial similarity of the tasks presented, but rather on the computational similarity of the strategies and mechanisms that underlie those behaviours. Computational validity goes beyond construct validity by directly addressing questions of information processing. Computational validity interacts with circuit validity as computation depends on circuits, but similar computations could be accomplished by different circuits. Because different individuals may use different computations to accomplish a given task, computational validity suggests that behaviour should be understood through the subject's point of view; thus, behaviour should be characterized on an individual level rather than a task level. Tasks can constrain the computational algorithms available to a subject and the observed subtleties of that behaviour can provide information about the computations used by each individual. Computational validity has especially high relevance for the study of psychiatric disorders, given the new views of psychiatry as identifying and mediating information processing dysfunctions that may show high inter-individual variability, as well as for animal models investigating aspects of human psychiatric disorders. This article is part of the theme issue ‘Systems neuroscience through the lens of evolutionary theory’.


2021, Vol 2091 (1), pp. 012038
Author(s): M. F. Karavay, A. M. Mikhailov

Abstract The paper discusses On-Board Computing Control Systems (OBCS) in astronautics, avionics, autonomous mobile devices, robotics, weapons control and multi-core microprocessors. An OBCS is a kind of "backbone" that unites many sensors, computing units, control devices and actuators. The architecture of these networks was developed some 30-40 years ago, when such systems met the technical requirements of the day in terms of dynamics and reliability. Nowadays, these systems must perform their functions for 10 to 15 years without maintenance, and the performance of system networks must be high enough for tasks such as monitoring "swarms" of hundreds of objects or acting as "garbage collectors" in orbit. Nevertheless, modern system networks continue to be based on bus or multi-bus architectures. Since these architectures serialize access for active nodes, a multi-bus solution is the main way to increase network performance, relying on very high operating frequencies of 2-4 GHz. This is an extensive path of development, and a problematic one. More promising is the intensive path, which in electronics and computer engineering is associated with parallel task execution: operating frequencies need not be ultra-high, staying within the 10-600 MHz range of modern devices, provided those devices work in parallel. The paper proposes a new approach to the design of heterogeneous parallel control system networks, the solution of parallel tasks, and conflict-free management of "passive" nodes. To the best of our knowledge, no such control system networks are available yet.


Author(s): Takuma Hikida, Hiroki Nishikawa, Hiroyuki Tomiyama

Dynamic scheduling of parallel tasks is an efficient technique for achieving high performance on multicore systems. Most existing algorithms for dynamic task scheduling assume that a task runs either on a single core or on a fixed number of cores. Moreover, existing studies have evaluated their methods in different experimental environments and under different models. In this paper, dynamic task scheduling methods are systematically organized and evaluated under a common framework.
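The abstract does not specify which scheduling algorithms are compared; as one minimal baseline for the single-core-per-task model it mentions, dynamic list scheduling can be sketched as below (the task set, durations, and the greedy earliest-free-core rule are illustrative assumptions, not the authors' method):

```python
import heapq

def schedule(tasks, num_cores):
    """Greedy dynamic list scheduler: each ready task is dispatched to the
    core that becomes free earliest (single core per task)."""
    # heap of (time the core becomes free, core id)
    cores = [(0.0, c) for c in range(num_cores)]
    heapq.heapify(cores)
    finish_times = {}
    for name, duration in tasks:  # tasks in ready order
        free_at, core = heapq.heappop(cores)
        end = free_at + duration
        finish_times[name] = (core, end)
        heapq.heappush(cores, (end, core))
    makespan = max(end for _, end in finish_times.values())
    return finish_times, makespan

if __name__ == "__main__":
    tasks = [("t0", 4), ("t1", 2), ("t2", 3), ("t3", 1)]
    plan, makespan = schedule(tasks, num_cores=2)
    print(plan, makespan)
```

Algorithms that let a task occupy a variable number of cores (the moldable/malleable case the paper alludes to) replace the single `heappop` with a decision about how many cores to grant each task.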


Author(s): A. F. Zadorozhny, V. A. Melent’ev

The aspects of topological compatibility of parallel computing systems and tasks are investigated in the present contribution. Based on the original topological model of parallel computations and on the unconventional graph description by its projections, the introduction of appropriate indexes is proposed and elucidated. On the example of hypercubic computing system (CS) and tasks with ring and star information topologies, we demonstrate the determination of indexes and their use in a comparative analysis of the applicability of interconnect with a given topology to solve the tasks with the same and different types of information topologies.
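The abstract does not reproduce the authors' indexes; as a related, standard illustration of matching a task topology to a hypercube interconnect, the sketch below checks that a ring embeds into a d-cube with dilation 1 via the reflected Gray code (the construction is textbook material, not the paper's method):

```python
def hypercube_edges(d):
    """Edge set of a d-dimensional hypercube: nodes are d-bit labels,
    and two nodes are adjacent iff their labels differ in one bit."""
    n = 1 << d
    return {(u, u ^ (1 << b))
            for u in range(n) for b in range(d)
            if u < u ^ (1 << b)}

def gray_code_ring(d):
    """A Hamiltonian ring in the d-cube via the reflected Gray code,
    i.e. a dilation-1 embedding of the ring task topology."""
    return [i ^ (i >> 1) for i in range(1 << d)]

edges = hypercube_edges(3)
ring = gray_code_ring(3)
# every consecutive pair, including the wrap-around, is a hypercube edge
ok = all(tuple(sorted((ring[i], ring[(i + 1) % len(ring)]))) in edges
         for i in range(len(ring)))
print(ok)
```

A star topology, by contrast, cannot in general map its hub's neighbours onto hypercube neighbours once the star's degree exceeds d, which is the kind of mismatch a compatibility index is meant to quantify.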


2021, Vol 2021, pp. 1-12
Author(s): Zhao Huiqi, Abdullah Khan, Xu Qiang, Shah Nazir, Yasir Ali, et al.

Crowdsourcing, in simple words, is the outsourcing of a task to an online market, where it is performed by a diverse crowd in order to harness human intelligence. Because crowdsourcing relies on online labor markets and on tasks performed in parallel, it is time- and cost-efficient. During a crowdsourcing activity, selecting properly labeled tasks and assigning them to appropriate workers is a challenge. The current study proposes a multicriteria-based task assignment (MBTA) mechanism for assigning each task to the most suitable worker. The mechanism uses multi-criteria decision-making (MCDM) methods for weighting the criteria and ranking the workers: Criteria Importance Through Intercriteria Correlation (CRITIC) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). Criteria for the workers are built from features identified in the literature; the CRITIC method assigns weights to these features, and the TOPSIS method evaluates and ranks the workers so that the most suitable worker can be matched to each task. The proposed work is novel in several ways: existing methods are mostly based on a single criterion or a few specific criteria, whereas this work is based on multiple criteria covering all the important features, and, as far as the literature review shows, no previous work has used MCDM methods for task assignment in crowdsourcing.
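CRITIC and TOPSIS are standard MCDM methods, so the weighting-then-ranking pipeline the abstract describes can be sketched generically (the worker matrix and the three criteria below are invented for illustration, and all criteria are assumed to be benefit-type):

```python
import numpy as np

def critic_weights(X):
    """CRITIC: a criterion's weight is its standard deviation times its
    total conflict (1 - correlation) with the other criteria."""
    Z = (X - X.min(0)) / (X.max(0) - X.min(0))   # min-max normalisation
    sigma = Z.std(0, ddof=1)
    R = np.corrcoef(Z, rowvar=False)
    C = sigma * (1 - R).sum(0)
    return C / C.sum()

def topsis(X, w):
    """TOPSIS: closeness of each alternative to the ideal solution."""
    V = X / np.linalg.norm(X, axis=0) * w        # weighted normalised matrix
    best, worst = V.max(0), V.min(0)             # ideal / anti-ideal points
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)

# rows: candidate workers; columns: e.g. accuracy, tasks completed, rating
X = np.array([[0.9, 120, 4.5],
              [0.7, 300, 4.9],
              [0.8,  60, 3.8]], float)
w = critic_weights(X)
scores = topsis(X, w)
print(np.argsort(scores)[::-1])  # workers ranked best-first
```

Cost-type criteria (e.g. price per task) would be inverted before normalisation; the abstract does not say which criterion directions the authors use.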


Author(s): M. Raviraja Holla, Alwyn R. Pais, D. Suma

The logistic map is a class of chaotic maps that remains in use in image cryptography. A logistic-map cryptosystem has two computationally intensive stages: permutation, which relocates pixels, and diffusion, which rescales their values. Research on refining the logistic map to make encryption more secure is ongoing; its efficiency also needs to improve if such models are to fit high-speed applications. New hardware accelerators offer this efficiency, but inherent data dependencies hinder their use. The novelty of this paper lies in identifying independent data-parallel tasks in the logistic map, offloading them to accelerators, and thereby improving efficiency. Of the two accelerator models proposed, the first achieves peak efficiency through coalesced memory access; the second further improves performance at the cost of more execution resources. Notably, the parallel accelerated logistic map achieved a significant speedup on the larger grayscale images used. Objective security estimates show that the two stages of the proposed systems progressively ensure security.
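The abstract does not give the authors' exact algorithms; as a rough illustration of the permutation-diffusion structure it describes, here is a minimal NumPy sketch (the parameters x0 and r, the argsort-based permutation, and the byte quantisation are all assumptions for illustration):

```python
import numpy as np

def logistic_keystream(n, x0=0.61, r=3.99):
    """Iterate the logistic map x <- r*x*(1-x); return the trajectory
    quantised to bytes plus the raw values."""
    xs = np.empty(n)
    x = x0
    for i in range(n):           # inherently sequential recurrence
        x = r * x * (1.0 - x)
        xs[i] = x
    return (xs * 255).astype(np.uint8), xs

def encrypt(img, x0=0.61, r=3.99):
    """Permutation (chaotic pixel shuffle) then diffusion (XOR keystream)."""
    flat = img.ravel()
    ks, xs = logistic_keystream(flat.size, x0, r)
    perm = np.argsort(xs)        # permutation stage
    return (flat[perm] ^ ks).reshape(img.shape)

def decrypt(cipher, x0=0.61, r=3.99):
    ks, xs = logistic_keystream(cipher.size, x0, r)
    perm = np.argsort(xs)
    shuffled = cipher.ravel() ^ ks   # undo diffusion
    flat = np.empty_like(shuffled)
    flat[perm] = shuffled            # invert the permutation
    return flat.reshape(cipher.shape)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
assert np.array_equal(decrypt(encrypt(img)), img)
```

Note that the keystream recurrence itself is sequential (each x depends on the previous one); that is exactly the kind of data dependency the paper says hinders accelerators, its contribution being to isolate the parts that are data-parallel.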


2021, Vol 290 (3), pp. 946-955
Author(s): Aleksandr Pirogov, Evgeny Gurevsky, André Rossi, Alexandre Dolgui

2021, Vol 12
Author(s): Rebecca Kazinka, Angus W. MacDonald, A. David Redish

In the WebSurf task, humans forage for videos, paying costs in the form of wait times, in a time-limited session. A variant of the task that manipulated demands during the wait time revealed the role of attention in susceptibility to sunk costs. Consistent with parallel tasks in rodents, previous studies have found that humans (undergraduates measured in the lab) preferred shorter delays but waited longer for more-preferred videos, suggesting that they treated the delays economically. In an Amazon Mechanical Turk (mTurk) sample, we replicated these predicted economic behaviors for a majority of participants. In the lab, participants showed susceptibility to sunk costs in this task, basing their decisions in part on the time they had already waited; we also observed this in the subset of the mTurk sample that behaved economically. In another version of the task, we added an attention check to the wait phase of the delay. While that attention check further increased the proportion of subjects with the predicted economic behaviors, it also removed the susceptibility to sunk costs. These findings have important implications for understanding how cognitive processes, such as the deployment of attention, are key to driving re-evaluation and susceptibility to sunk costs.


Author(s): Linda Onnasch, Terpsichore Panayotidis

Social loafing describes a phenomenon in human-human interaction whereby people exert less effort when working in a team than when working individually. With the growth of human-robot teams, studying potential social-loafing effects in human-robot interaction becomes increasingly relevant. In a laboratory experiment, participants worked on two parallel tasks with a human or a robotic partner. The primary task was completed once coactively and once collectively with the respective partner and a shared task output. Performance measures revealed no effects of partner or working condition. However, subjective data revealed that participants invested the least effort when working collectively with the robot, compared to working collectively with a human or in the coactive conditions. At the same time, the robot’s performance was perceived as worse than the human confederate’s. Based on these results, we discuss how interacting with robots shares social facets of human teamwork but does not fully resemble it.

