Taxonomia de Falhas em Programas Concorrentes em Elixir (Taxonomy of Faults in Concurrent Elixir Programs)

Author(s):  
Matheus Deon Bordignon ◽  
Rodolfo Adamshuk Silva

Computer processing capacity is becoming increasingly insufficient, which encourages the use of concurrent programming to develop applications that reduce computing time. Due to features such as communication, synchronization and non-determinism, concurrent programs may present concurrency-related errors. This paper presents a defect taxonomy for Elixir concurrent programs considering the functions present in the Kernel and Task modules. Defect patterns were identified from the insertion of small disturbances into concurrent functions present in a benchmark of concurrent Elixir programs. The association between the inserted defects and concurrent programming errors resulted in a defect taxonomy for concurrent Elixir programs. The defined taxonomy will be used to support the definition of criteria and testing tools for concurrent Elixir programs.
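A minimal sketch of the kind of "small disturbance" described above, written in Python rather than Elixir for illustration only (the function names and the shared dictionary are hypothetical, not taken from the paper's benchmark): the mutation drops the wait on a spawned task, producing a non-deterministic result.

```python
# Illustrative sketch in Python (not Elixir): a "small disturbance" dropped into
# a concurrent function, analogous to the defect insertion described above.
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=1)
shared = {"total": None}

def correct_version(items):
    # Spawn a task and wait for its result (roughly what Task.async/Task.await
    # do in Elixir): the synchronization is kept.
    return pool.submit(sum, items).result()

def mutated_version(items):
    # Disturbance: the wait on the spawned task is dropped, so the caller may
    # read the shared state before the task has written it (a concurrency defect).
    pool.submit(lambda: shared.update(total=sum(items)))
    return shared["total"]   # non-deterministic: often None, sometimes the sum

if __name__ == "__main__":
    print(correct_version(range(10)))   # always 45
    print(mutated_version(range(10)))   # result depends on the race
```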

2021 ◽  
Vol 178 (3) ◽  
pp. 229-266
Author(s):  
Ivan Lanese ◽  
Adrián Palacios ◽  
Germán Vidal

Causal-consistent reversible debugging is an innovative technique for debugging concurrent systems. It allows one to go back in the execution focusing on the actions that most likely caused a visible misbehavior. When such an action is selected, the debugger undoes it, including all and only its consequences. This operation is called a causal-consistent rollback. In this way, the user can avoid being distracted by the actions of other, unrelated processes. In this work, we introduce its dual notion: causal-consistent replay. We allow the user to record an execution of a running program and, in contrast to traditional replay debuggers, to reproduce a visible misbehavior inside the debugger including all and only its causes. Furthermore, we present a unified framework that combines both causal-consistent replay and causal-consistent rollback. Although most of the ideas that we present are rather general, we focus on a popular functional and concurrent programming language based on message passing: Erlang.
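A toy sketch of the underlying idea, under assumptions of our own (this is not the authors' debugger or its data model): each recorded event lists the events it causally depends on, and a causal-consistent rollback undoes the selected event together with all, and only, the events that transitively depend on it.

```python
# Toy sketch of causal-consistent rollback over a recorded event log, where
# each event stores the ids of the events it causally depends on.
from dataclasses import dataclass, field

@dataclass
class Event:
    eid: int
    label: str
    deps: set = field(default_factory=set)   # ids this event causally depends on

def consequences(log, target):
    """All events that (transitively) depend on `target`, i.e. must be undone with it."""
    out, changed = {target}, True
    while changed:
        changed = False
        for ev in log:
            if ev.eid not in out and ev.deps & out:
                out.add(ev.eid)
                changed = True
    return out

def rollback(log, target):
    """Undo `target` and all and only its consequences; unrelated events stay."""
    undo = consequences(log, target)
    return [ev for ev in log if ev.eid not in undo]

log = [
    Event(1, "P1 sends m"),
    Event(2, "P2 receives m", deps={1}),
    Event(3, "P2 sends ack", deps={2}),
    Event(4, "P3 does local work"),            # unrelated process
]
print([ev.label for ev in rollback(log, 1)])   # only P3's action remains
```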


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Shamima Yesmin ◽  
S.M. Zabed Ahmed

Purpose: The purpose of this paper is to investigate Library and Information Science (LIS) students' understanding of the infodemic and related terminologies and their ability to categorize COVID-19-related problematic information types using examples from social media platforms.

Design/methodology/approach: The participants of this study were LIS students from a public-funded university located on the south coast of Bangladesh. An online survey was conducted which, in addition to demographic and study information, asked students to identify the correct definition of the infodemic and related terminologies and to categorize COVID-related problematic social media posts based on their inherent problem characteristics. The correct answer to each definition and task question was assigned a score of "1", whereas a wrong answer was coded as "0". The percentage of correct answers, in total and for each category of definition and task-specific questions, was computed. An independent-samples t-test and ANOVA were run to examine differences in total and category-specific scores between student groups.

Findings: The findings revealed that students' knowledge concerning the definition of the infodemic and related terminologies and the categorization of COVID-19-related problematic social media posts was poor. There was no significant difference in correctness scores between student groups in terms of gender, age and study level.

Originality/value: To the best of the authors' knowledge, this is the first effort to understand LIS students' recognition and classification of problematic information. The findings can assist LIS departments in revising and improving the existing information literacy curriculum for students.
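A minimal sketch of the scoring and group-comparison pipeline described above, using made-up responses and hypothetical column names (the actual questionnaire items and group labels are not reproduced here); the t-test and ANOVA are the standard SciPy routines.

```python
# Sketch of the 1/0 scoring plus t-test and one-way ANOVA described above,
# on synthetic data with hypothetical column names.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n = 120  # hypothetical number of respondents

# Each qN column holds 1 (correct) or 0 (wrong), matching the coding scheme.
df = pd.DataFrame({
    "gender": rng.choice(["male", "female"], n),
    "study_level": rng.choice(["year-1", "year-2", "year-3", "year-4"], n),
})
answer_cols = ["q1", "q2", "q3", "q4", "q5"]
for q in answer_cols:
    df[q] = rng.integers(0, 2, n)

df["pct_correct"] = 100 * df[answer_cols].sum(axis=1) / len(answer_cols)

# Independent-samples t-test: total correctness score by gender.
male = df.loc[df["gender"] == "male", "pct_correct"]
female = df.loc[df["gender"] == "female", "pct_correct"]
print(stats.ttest_ind(male, female, equal_var=False))

# One-way ANOVA: correctness score across study levels.
groups = [g["pct_correct"].to_numpy() for _, g in df.groupby("study_level")]
print(stats.f_oneway(*groups))
```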


2017 ◽  
Vol 11 (1) ◽  
pp. 31-43 ◽  
Author(s):  
Rolf Moeckel ◽  
Leta Huntsinger ◽  
Rick Donnelly

Background: In four-step travel demand models, average trip generation rates are traditionally applied to static household type definitions. In reality, however, trip generation is more heterogeneous, with some households making no trips and others making more than a dozen, even if they are of the same household type.

Objective: This paper aims at improving trip-generation methods without jumping all the way to an activity-based model, which is a very costly form of modeling travel demand both in terms of development and computer processing time.

Method: Two fundamental improvements in trip generation are presented in this paper. First, the definition of household types, which traditionally is based on professional judgment rather than science, is revised to optimally reflect trip generation differences between the household types. For this purpose, over 67 million definitions of household types were analyzed econometrically in a Big-Data exercise. Second, a microscopic trip generation module was developed that specifies trip generation individually for every household.

Results: This new module represents the heterogeneity in trip generation found in reality, while maintaining all household attributes for subsequent models. Even though the subsequent steps of the trip-based model used in this research remained unchanged, the model was improved by using microscopic trip generation: mode-specific constants were reduced by 9%, and the Root Mean Square Error of the assignment validation improved by 7%.
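A sketch of the microscopic idea, with entirely made-up numbers (the paper's household types and survey distributions are not reproduced here): instead of applying one average rate to every household of a type, each household draws its own trip count from that type's observed frequency distribution, so households with zero or many trips are preserved.

```python
# Sketch with made-up numbers: microscopic trip generation draws a trip count
# per household from its type's observed distribution, instead of applying one
# average rate to every household of that type.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical observed distribution of daily trips for one household type:
# P(0 trips), P(1 trip), ..., P(8 trips). In practice these come from survey data.
trip_counts = np.arange(9)
probs = np.array([0.08, 0.12, 0.18, 0.20, 0.16, 0.11, 0.08, 0.04, 0.03])

average_rate = (trip_counts * probs).sum()          # classic aggregate rate
households = 10_000
micro_trips = rng.choice(trip_counts, size=households, p=probs)

print(f"aggregate model  : every household makes {average_rate:.2f} trips")
print(f"microscopic model: mean {micro_trips.mean():.2f}, "
      f"{(micro_trips == 0).mean():.0%} make no trips, "
      f"{(micro_trips >= 7).mean():.0%} make 7+ trips")
```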


Author(s):  
Matthew Dale ◽  
Julian F. Miller ◽  
Susan Stepney ◽  
Martin A. Trefzer

The reservoir computing (RC) framework states that any nonlinear, input-driven dynamical system (the reservoir) exhibiting properties such as a fading memory and input separability can be trained to perform computational tasks. This broad inclusion of systems has led to many new physical substrates for RC. Properties essential for reservoirs to compute are tuned through reconfiguration of the substrate, such as a change in virtual topology or physical morphology. As a result, each substrate possesses a unique ‘quality’, obtained through reconfiguration, to realize different reservoirs for different tasks. Here we describe an experimental framework to characterize the quality of potentially any substrate for RC. Our framework reveals that a definition of quality is not only useful for comparing substrates, but can also help map the non-trivial relationship between properties and task performance. In the wider context, the framework offers a greater understanding of what makes a dynamical system compute, helping improve the design of future substrates for RC.
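A compact sketch of the generic RC recipe the framework builds on, under assumptions of our own (a random echo state network stands in for the physical substrates studied in the paper): the reservoir is fixed, its spectral radius is scaled below 1 to give a fading memory, and only a linear readout is trained.

```python
# Generic echo state network sketch: fixed random reservoir with fading memory,
# trained linear readout (ridge regression). A stand-in, not the paper's substrates.
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 2000                            # reservoir size, number of time steps

u = rng.uniform(-1, 1, T)                   # input signal
y_target = np.roll(u, 5)                    # toy task: recall the input 5 steps back

W_in = rng.uniform(-0.5, 0.5, (N, 1))
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1: fading memory

x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in[:, 0] * u[t])  # input-driven nonlinear dynamics
    states[t] = x

# Train only the readout (ridge regression), discarding a washout period.
washout, ridge = 100, 1e-6
S, Y = states[washout:], y_target[washout:]
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(N), S.T @ Y)
pred = S @ W_out
print("NRMSE:", np.sqrt(np.mean((pred - Y) ** 2)) / np.std(Y))
```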


1996 ◽  
Vol 6 (2) ◽  
pp. 127-139 ◽  
Author(s):  
Nicoletta Sabadini ◽  
Sebastiano Vigna ◽  
Robert F. C. Walters

In this paper, we propose a new and elegant definition of the class of recursive functions, which is analogous to Kleene's definition but differs in the primitives taken, thus demonstrating the computational power of the concurrent programming language introduced in Walters (1991), Walters (1992) and Khalil and Walters (1993). The definition can be immediately rephrased for any distributive graph in a countably extensive category with products, thus allowing a wide, natural generalization of computable functions.
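For reference (standard material, not the paper's alternative primitives): Kleene's classical definition builds the partial recursive functions from the zero, successor and projection functions, closed under the following schemata.

```latex
% Kleene's classical schemata (standard definitions, shown for comparison only).
\begin{align*}
&\text{Composition:} && h(\vec{x}) \simeq f\bigl(g_1(\vec{x}),\dots,g_m(\vec{x})\bigr)\\
&\text{Primitive recursion:} && h(\vec{x},0) \simeq f(\vec{x}), \qquad
  h(\vec{x},y+1) \simeq g\bigl(\vec{x},y,h(\vec{x},y)\bigr)\\
&\text{Minimization:} && h(\vec{x}) \simeq \mu y\,\bigl[f(\vec{x},y)=0\bigr]
\end{align*}
```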


2019 ◽  
Author(s):  
Zachary L Howard ◽  
Paul Michael Garrett ◽  
Daniel R. Little ◽  
James T. Townsend ◽  
Ami Eidels

Systems Factorial Technology (SFT) is a popular framework that has been used to investigate processing capacity across many psychological domains over the past 25+ years. To date, it has been assumed that no processing resources are used for sources in which no signal has been presented (i.e., a location that can contain a signal but does not on a given trial). Hence, response times are assumed to be driven purely by the "signal-containing" location or locations. This assumption is critical to the underlying mathematics of the capacity coefficient measure of SFT. In this manuscript, we show that stimulus locations influence response times even when they contain no signal, and that this influence has repercussions for the interpretation of processing capacity under the SFT framework, particularly in conjunctive (AND) tasks, where positive responses require the detection of signals in multiple locations. We propose a modification to the AND task requiring participants to fully identify both target locations on all trials. This modification allows a new coefficient to be derived. We apply the new coefficient to novel experimental data and resolve a previously reported empirical paradox, where observed capacity was limited in an OR detection task but super capacity in an AND detection task. Hence, previously reported differences in processing capacity between OR and AND task designs are likely to have been spurious.
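For context, the standard capacity coefficients from the SFT literature compare the redundant-target condition against the single-target conditions via (reverse) cumulative hazards; it is through the single-target terms that the no-signal assumption discussed above enters. The paper's modified AND coefficient is not reproduced here.

```latex
% Standard SFT capacity coefficients; S_X and F_X are the survivor and
% cumulative distribution functions of response times in condition X.
% The new AND coefficient proposed in the paper is not shown here.
\begin{gather*}
C_{\mathrm{OR}}(t) = \frac{H_{AB}(t)}{H_{A}(t) + H_{B}(t)}, \qquad H_{X}(t) = -\ln S_{X}(t),\\[4pt]
C_{\mathrm{AND}}(t) = \frac{K_{A}(t) + K_{B}(t)}{K_{AB}(t)}, \qquad K_{X}(t) = \ln F_{X}(t).
\end{gather*}
```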


Author(s):  
Sushruta Mishra ◽  
Sunil Kumar Mohapatra ◽  
Brojo Kishore Mishra ◽  
Soumya Sahoo

This chapter describes cloud computing, an emerging concept combining many fields of computing. The foundation of cloud computing is the delivery of services, software and processing capacity over the Internet, reducing cost, increasing storage, automating systems, decoupling service delivery from the underlying technology, and providing flexibility and mobility of information. However, the actual realization of these benefits is far from being achieved for mobile applications and opens many new research questions. Together with the explosive growth of mobile applications and the emergence of the cloud computing concept, mobile cloud computing (MCC) has been introduced as a potential technology for mobile services. Against this background, the chapter provides an overview of mobile cloud computing, presenting its definition, architecture, and advantages, and offering an in-depth treatment of various aspects and features of MCC.


Mathematics ◽  
2020 ◽  
Vol 8 (11) ◽  
pp. 1986
Author(s):  
María Sierra-Paradinas ◽  
Antonio Alonso-Ayuso ◽  
Francisco Javier Martín-Campo ◽  
Francisco Rodríguez-Calo ◽  
Enrique Lasso

The problem of facility delocation in the retail sector is addressed in this paper by proposing a novel mixed 0-1 linear optimization model. The aim is to decide whether to close existing stores or to adopt an alternative type of store management policy, so as to optimize the profit of the entire retail network. Each management policy has a different repercussion on the final profit of the stores due to the different margins obtained from the customers. Furthermore, closing stores can cause customers to leave the whole retail network, which is modeled through their tendency to abandon the network. Capacity constraints are imposed depending on the number of stores that should stay open, as well as on the costs of ceasing operation, customer behavior and final prices; these constraints depend on the type of management policy implemented by the store. Due to the commercial requirements concerning customer behavior, a set of non-linear constraints appears in the definition of the model. Classical Fortet inequalities are used to linearize these constraints and thus obtain a mixed 0-1 linear optimization model. Owing to the size of the network, border constraints have been imposed to obtain results in a reasonable computing time. The model implementation introduces smart sets of indices to reduce the number of constraints and variables. Finally, computational results are presented using data from a real-world case study and, additionally, a set of computational experiments using randomly generated data.
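For illustration, the classical Fortet linearization replaces a product of 0-1 variables with linear inequalities (generic form; the paper's specific constraints are not reproduced here): for $x, y \in \{0,1\}$ and an auxiliary variable $z$ intended to equal $x\,y$,

```latex
% Classical Fortet linearization of z = x * y for binary x, y (generic form).
z \le x, \qquad z \le y, \qquad z \ge x + y - 1, \qquad z \ge 0 .
```

At any feasible 0-1 point these inequalities force $z = x\,y$, so the nonlinear product can be dropped from the model.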


Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1469 ◽  
Author(s):  
Raul Garcia-Martin ◽  
Raul Sanchez-Reillo

Human wrist vein biometric recognition is one of the least used vascular biometric modalities. Nevertheless, it has similar usability and is as safe as the two most common vascular variants in the commercial and research worlds: hand palm vein and finger vein modalities. Moreover, the wrist vein variant, with its wider veins, provides clearer and better visualization and definition of the unique vein patterns. In this paper, a novel contactless wrist vein system has been designed, implemented, and tested. For this purpose, a new contactless database has been collected with the software algorithm TGS-CVBR®. The database, called UC3M-CV1, consists of 1200 near-infrared contactless images of 100 different users, collected in two separate sessions from the wrists of 50 subjects (25 females and 25 males). Environmental light conditions were not controlled across subjects and sessions: different times of day and different places (outdoor/indoor). The software algorithm created for the recognition task is PIS-CVBR®. The results obtained by combining these three elements, TGS-CVBR®, PIS-CVBR®, and the UC3M-CV1 dataset, are compared with two other wrist contact databases, PUT and UC3M (best Equal Error Rate (EER) = 0.08%), taking the computing time into account and demonstrating the viability of a contactless, real-time-processing wrist system.
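A generic sketch of how an Equal Error Rate such as the 0.08% reported above is computed from genuine and impostor comparison scores (synthetic scores; this is not the PIS-CVBR® pipeline): the EER is the operating point where the false acceptance and false rejection rates coincide.

```python
# Generic EER computation from genuine/impostor similarity scores
# (synthetic scores; not the actual PIS-CVBR(R) matcher).
import numpy as np

rng = np.random.default_rng(1)
genuine = rng.normal(0.80, 0.05, 2000)     # same-wrist comparison scores
impostor = rng.normal(0.50, 0.08, 20000)   # different-wrist comparison scores

thresholds = np.linspace(0, 1, 1001)
far = np.array([(impostor >= t).mean() for t in thresholds])  # false acceptance rate
frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejection rate

i = np.argmin(np.abs(far - frr))           # threshold where FAR and FRR cross
print(f"EER ~ {100 * (far[i] + frr[i]) / 2:.2f}% at threshold {thresholds[i]:.3f}")
```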

