concurrent execution
Recently Published Documents


TOTAL DOCUMENTS: 146 (FIVE YEARS: 17)
H-INDEX: 11 (FIVE YEARS: 0)

2021 ◽  
Author(s):  
Majid Ismail Al Hammadi ◽  
Andreas Scheed ◽  
Hasan Alsabri ◽  
Hasan Al Ali ◽  
Yaqoub Al Obaidli ◽  
...  

Abstract Gas SIMOPS is the concurrent execution of two or more activities, i.e., drilling operations, oil production, and gas injection, on an offshore wellhead tower, thereby ensuring uninterrupted oil production and continuous reservoir pressure management through gas injection. The alternative to gas injection in this scenario was gas flaring, which has major environmental and financial impacts. Considering the continuous presence of personnel on a drilling rig working over a wellhead tower with high-pressure gas injection, an extensive risk analysis was conducted and additional control and mitigation measures were implemented. This initiative also contributed to the company's zero gas flaring vision by achieving a large reduction in CO2 emissions. This successful Gas SIMOPS model is already being extended to other fields. To achieve this objective while maintaining 100% HSE performance, an in-house multi-disciplinary team collaborated and successfully executed Gas SIMOPS for the first time in UAE offshore operations. The execution of Gas SIMOPS has brought major economic benefits to the company through the additional gas savings achieved.



2021 ◽  
Vol 2083 (4) ◽  
pp. 042085
Author(s):  
Shanshan Ji

Abstract With the development of the Internet and information technology, cloud computing has attracted extensive attention from industry and academia. The large scale of resources, the concurrent execution of multiple tasks, and the dynamic changes in application resource requests make data center resource allocation a severe challenge. To address the low balance achieved by traditional resource allocation, this paper focuses on optimizing data center resource allocation and proposes a cloud-computing-based allocation strategy that completes resource allocation and assignment effectively. The designed allocation method is also verified through a case study. The research shows that the distribution balance degree of the proposed strategy is significantly higher than that of the control group, demonstrating that the designed strategy solves the low-balance problem of traditional resource allocation.
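
The abstract evaluates allocations by their "distribution balance degree" but does not define the formula. A minimal sketch follows, assuming the degree is derived from the spread of host utilizations after allocation; the function name and the utilization values are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of a "distribution balance degree" metric, assuming it is based
# on the spread of host utilizations after allocation (an assumption; the
# abstract does not specify the exact formula).
from statistics import mean, pstdev

def balance_degree(utilizations):
    """Returns 1.0 for a perfectly even load; lower values mean a more skewed allocation."""
    avg = mean(utilizations)
    if avg == 0:
        return 1.0
    return max(0.0, 1.0 - pstdev(utilizations) / avg)

# Hypothetical CPU utilizations of four hosts after placing a batch of tasks.
print(balance_degree([0.62, 0.58, 0.64, 0.60]))   # well balanced -> close to 1
print(balance_degree([0.95, 0.20, 0.85, 0.10]))   # skewed -> noticeably lower
```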



2021 ◽  
Author(s):  
Anastasija Nikiforova ◽  
Janis Bicevskis ◽  
Girts Karnitis ◽  
Ivo Oditis ◽  
Zane Bicevska
Keyword(s):  


Author(s):  
Vinicius S. da Silva ◽  
Angelo G. D. Nogueira ◽  
Everton Camargo Lima ◽  
Hiago M. G. A. Rocha ◽  
Matheus S. Serpa ◽  
...  


2021 ◽  
Author(s):  
Pablo Carvalho ◽  
Lúcia Maria De A. Drummond ◽  
Cristiana Bentes

Heterogeneous systems employing CPUs and GPUs are becoming increasingly popular in large-scale data centers and cloud environments. In these platforms, sharing a GPU across different applications is an important feature for improving hardware utilization and system throughput. However, in scenarios where GPUs are competitively shared, some challenges arise. The decision on the simultaneous execution of different kernels is made by the hardware and depends on the kernels' resource requirements. Moreover, it is very difficult to understand all the hardware variables involved in simultaneous execution decisions well enough to describe a formal allocation method. In this work, we studied the impact that kernel resource requirements have on concurrent execution and used machine learning (ML) techniques to infer the interference caused by concurrent execution and to classify the slowdown that results from this interference. The ML techniques were analyzed over the GPU benchmark suites Rodinia, Parboil, and SHOC. Our results showed that, among the features selected in the analysis, the number of blocks per grid, the number of threads per block, and the number of registers are the resource consumption features that most affect the performance of concurrent execution.
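
The sketch below illustrates the general idea of classifying co-execution slowdown from the resource features named in the abstract. It is not the authors' pipeline: the classifier choice and all feature and label values are assumptions for demonstration only.

```python
# Illustrative sketch (not the authors' pipeline): classify the slowdown of a
# concurrently executed GPU kernel pair from the resource requirements named in
# the abstract (blocks per grid, threads per block, registers per thread).
from sklearn.ensemble import RandomForestClassifier

# Each row describes a kernel pair: [blocksA, threadsA, regsA, blocksB, threadsB, regsB].
# In the paper these would come from profiling Rodinia, Parboil and SHOC kernels;
# the values here are hypothetical.
X_train = [
    [1024,  256, 32,  512, 128, 16],
    [  64,  512, 48, 2048,  64, 24],
    [ 256,  128, 20,  256, 256, 40],
    [4096, 1024, 64,  128,  32, 12],
]
# Observed slowdown class when each pair runs concurrently (0 = low, 1 = moderate, 2 = high).
y_train = [0, 2, 1, 2]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Predict the interference class for an unseen, hypothetical kernel pairing.
print(model.predict([[512, 256, 28, 1024, 128, 20]]))
```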



2021 ◽  
Author(s):  
G Dhanabalan ◽  
Tamil Selvi S

Abstract The powerful advantages of the programmable logic controller (PLC) dominate the process industries. The scan time of a PLC increases with the number of inputs and rungs added to the ladder diagram (LD). Researchers have identified and proved that the field programmable gate array (FPGA) is more suitable than the PLC for high-speed applications. A PLC executes the instructions represented through an LD, but an FPGA does not support LD, and PLC programmers are not familiar with FPGA programming. This work develops application software that uses LabVIEW to generate equivalent Verilog HDL code for an LD. The novelty of this work is that each rung is defined using an "assign" statement to ensure concurrent execution of all rungs. A data acquisition system was created to monitor the digital signals handled by the FPGA. The software was verified with a case study of a substance-mixing and a traffic light control system.
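
The core translation idea, mapping each ladder rung to a continuous "assign" statement so all rungs evaluate concurrently in hardware, can be sketched as below. The authors' tool is built in LabVIEW; this is a simplified Python illustration, and the rung structure and signal names are assumptions.

```python
# Simplified sketch (not the authors' LabVIEW tool): map ladder-diagram rungs to
# Verilog continuous "assign" statements so that all rungs evaluate concurrently.
# Signal names and rung logic below are hypothetical.

def rung_to_assign(output, contacts):
    """One rung: series contacts are AND-ed; a '~' prefix marks a normally-closed contact."""
    return f"assign {output} = {' & '.join(contacts)};"

rungs = [
    rung_to_assign("motor", ["start", "~stop"]),
    rung_to_assign("mixer", ["motor", "tank_full"]),
]

print("module plc_logic(input start, stop, tank_full, output motor, mixer);")
for line in rungs:
    print("  " + line)
print("endmodule")
```

Because each generated line is a continuous assignment, the synthesized logic for every rung updates in parallel rather than being scanned sequentially as in a PLC cycle.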



2021 ◽  
Author(s):  
Emil Koutanov

To sidestep reasoning about the complex effects of concurrent execution, many system designers have conveniently embraced strict serializability on the strength of its claims, support from commercial and open-source database communities and ubiquitous levels of industry adoption. Crucially, distributed components are built on this model; multiple schedulers are composed in an event-driven architecture to form larger, ostensibly correct systems. This paper examines the oft-misconstrued position of strict serializability as a composable correctness criterion in the design of such systems. An anomaly is presented wherein a strict serializable scheduler in one system produces a history that cannot be serially applied to even a weak prefix-consistent replica in logical timestamp order. Several solutions are presented under varying isolation properties, including novel isolation properties contributed by this paper. We also distinguish between concurrent schedulers based on their propensity to produce deterministic histories. It is further shown that every nondeterministic scheduler is anomaly-prone, every nonconcurrent scheduler is anomaly-free, and that at least one deterministic concurrent scheduler is anomaly-free.



2021 ◽  
Vol 5 (3) ◽  
pp. 413-420
Author(s):  
David Kristiadi ◽  
Marwiyati

Quality of experience (QoE) when accessing video streaming is a challenge across varying network bandwidths and speeds. Adaptive streaming is an answer for achieving good QoE. An architecture for an adaptive streaming server based on Dynamic Adaptive Streaming over HTTP (DASH) is proposed. The system consists of two services, transcoding and streaming. The transcoding service encodes an audio file, multi-bitrate video files, and a manifest.mpd file. The streaming service serves client streaming requests according to the client's network profile. The system is built using the Golang programming environment and FFMPEG. The transcoding service supports several execution modes (serial and concurrent) and passing modes (1 pass and 2 passes). The transcoding service tests show that concurrent execution is 11.5% faster than serial execution, and 1-pass transcoding is 46.95% faster than 2-pass transcoding, although the bitrate of the output video is lower than the specified bitrate parameter. The streaming service delivers good QoE: across the 5 test scenarios, buffer level = 0 occurred 5 times, with a total duration of 64 seconds, and only when the network speed changed abruptly from fast to very slow.
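
A minimal sketch of the serial versus concurrent transcoding idea is shown below. The paper's implementation is in Go with FFMPEG; this Python version, the file names, and the bitrate ladder are assumptions used only to illustrate why the concurrent mode finishes faster.

```python
# Illustrative sketch of serial vs. concurrent multi-bitrate transcoding
# (the paper's system is written in Go; this version is an assumption).
import subprocess
from concurrent.futures import ThreadPoolExecutor

SOURCE = "input.mp4"                    # hypothetical source file
BITRATES = ["400k", "800k", "1500k"]    # hypothetical DASH bitrate ladder

def transcode(bitrate):
    """Run a single-pass FFMPEG encode at the requested video bitrate."""
    out = f"video_{bitrate}.mp4"
    subprocess.run(
        ["ffmpeg", "-y", "-i", SOURCE, "-c:v", "libx264", "-b:v", bitrate, "-an", out],
        check=True,
    )
    return out

def run_serial():
    # Renditions are encoded one after another.
    return [transcode(b) for b in BITRATES]

def run_concurrent():
    # Each rendition is encoded in its own worker, mirroring the concurrent mode
    # that the abstract reports to be about 11.5% faster than serial execution.
    with ThreadPoolExecutor(max_workers=len(BITRATES)) as pool:
        return list(pool.map(transcode, BITRATES))

if __name__ == "__main__":
    print(run_concurrent())
```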



Author(s):  
Rainer Beurskens ◽  
Dennis Brueckner ◽  
Hagen Voigt ◽  
Thomas Muehlbauer

Abstract The concurrent execution of two or more tasks results in performance decrements in one or both tasks. Practicing dual-task (DT) situations has been shown to reduce these performance decrements. The purpose of this study was to investigate the effects of consecutive versus concurrent practice on cognitive and motor task performance under single-task (ST) and DT conditions. Forty-five young adults (21 females, 24 males) were randomly assigned to either a consecutive practice (INT consecutive) group, a concurrent practice (INT concurrent) group, or a control (CON) group (i.e., no practice). Both INT groups performed 2 days of acquisition, i.e., practicing a cognitive and a motor task either consecutively or concurrently. The cognitive task required participants to perform an auditory Stroop task, and the number of correct responses was used as the outcome measure. In the motor task, participants were asked to stand on a stabilometer and keep the platform as close to horizontal as possible. The time in balance was calculated for further analysis. Pre- and post-practice testing included performance assessment under ST (i.e., cognitive task only, motor task only) and DT (i.e., cognitive and motor task simultaneously) test conditions. Pre-practice testing revealed no significant group differences under ST and DT test conditions for either the cognitive or the motor task measure. During acquisition, both INT groups improved their cognitive and motor task performance. Post-practice testing showed significantly better cognitive and motor task values under ST and DT test conditions for the two INT groups compared with the CON group. Further comparisons between the two INT groups revealed better motor but not cognitive task values in favor of the INT consecutive practice group (ST: p = 0.022; DT: p = 0.002). We conclude that consecutive and concurrent practice resulted in better cognitive (ST condition) and motor (ST and DT test conditions) task performance than no practice. In addition, consecutive practice resulted in superior motor task performance (ST and DT test conditions) compared with concurrent practice and is therefore recommended when executing DT practice schedules.


