Activity Scheduling on Identical Parallel Processors

The efficiency of parallel processors for fast computing depends mainly on how activities are scheduled. The key factor in activity scheduling is waiting time, which directly influences the overall computation time. Minimizing the variance of waiting time, known as Waiting Time Variance (WTV), is a Quality of Service (QoS) metric that improves the efficiency of activity scheduling. The main focus of this paper is allocating activities from an activity pool and scheduling them on identical parallel processors for large-scale execution while minimizing WTV. In large-scale computing, activities are complex in nature, so prior knowledge of each activity must be available before an activity schedule can be prepared for efficient and rapid computation. A snake-walk style of activity distribution among the parallel processors is presented in this paper for the minimization problem. The minimization of WTV is measured with the help of three heuristic methods, named RSS, VS and BS. The experimental results are compared with existing schemes and demonstrate that the new snake-style scheme outperforms proven schemes across a wide range of activities. The algorithm's consistent findings are illustrated with graphs.
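The abstract does not give the snake-walk rule in detail, but the common interpretation is a boustrophedon pass over sorted jobs: left-to-right across the processors on one pass, right-to-left on the next. A minimal sketch under that assumption (function names and the per-processor WTV measure are illustrative, not from the paper):

```python
from statistics import pvariance

def snake_assign(jobs, m):
    """Distribute job processing times across m identical processors
    in a snake (boustrophedon) pattern over the sorted job list:
    pass 1 goes processor 0..m-1, pass 2 goes m-1..0, and so on."""
    queues = [[] for _ in range(m)]
    for i, job in enumerate(sorted(jobs)):
        pass_no, pos = divmod(i, m)
        proc = pos if pass_no % 2 == 0 else m - 1 - pos
        queues[proc].append(job)
    return queues

def waiting_time_variance(queue):
    """WTV of one processor's queue: the waiting time of a job is the
    sum of the processing times scheduled before it."""
    waits, elapsed = [], 0
    for t in queue:
        waits.append(elapsed)
        elapsed += t
    return pvariance(waits) if len(waits) > 1 else 0.0
```

For six jobs on two processors, `snake_assign([5, 3, 8, 1, 9, 2], 2)` interleaves the sorted jobs as `[[1, 5, 8], [2, 3, 9]]`, keeping the total work on each processor close and the waiting-time spread small.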


2020 ◽  
Vol 36 (10) ◽  
pp. 3011-3017 ◽  
Author(s):  
Olga Mineeva ◽  
Mateo Rojas-Carulla ◽  
Ruth E Ley ◽  
Bernhard Schölkopf ◽  
Nicholas D Youngblut

Abstract Motivation Methodological advances in metagenome assembly are rapidly increasing the number of published metagenome assemblies. However, identifying misassemblies is challenging due to a lack of closely related reference genomes that can act as pseudo ground truth. Existing reference-free methods are no longer maintained, can make strong assumptions that may not hold across a diversity of research projects, and have not been validated on large-scale metagenome assemblies. Results We present DeepMAsED, a deep learning approach for identifying misassembled contigs without the need for reference genomes. Moreover, we provide an in silico pipeline for generating large-scale, realistic metagenome assemblies for comprehensive model training and testing. DeepMAsED accuracy substantially exceeds the state-of-the-art when applied to large and complex metagenome assemblies. Our model estimates a 1% contig misassembly rate in two recent large-scale metagenome assembly publications. Conclusions DeepMAsED accurately identifies misassemblies in metagenome-assembled contigs from a broad diversity of bacteria and archaea without the need for reference genomes or strong modeling assumptions. Running DeepMAsED is straightforward, as is model re-training with our dataset generation pipeline. Therefore, DeepMAsED is a flexible misassembly classifier that can be applied to a wide range of metagenome assembly projects. Availability and implementation DeepMAsED is available from GitHub at https://github.com/leylabmpi/DeepMAsED. Supplementary information Supplementary data are available at Bioinformatics online.



2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Samira Melki ◽  
Moncef Gueddari

The production of phosphoric acid by the Tunisian Chemical Group, in Sfax, Tunisia, led to the degradation of the groundwater quality of the Sfax-Agareb aquifer, mainly through the infiltration of phosphogypsum leachates. Spatiotemporal monitoring of the groundwater quality was carried out by performing bimonthly sampling between October 2013 and October 2014. Samples collected in the current study were subject to physicochemical parameter measurements and to analysis of the major elements, orthophosphates, fluorine, trace metals, and stable isotopes (18O, 2H). The results show that the phosphogypsum leachate infiltration has a major effect on the downstream part of the aquifer, where the highest values of conductivity, SO42-, Ortho-P, and F-, and the lowest pH, were recorded. In addition, these results indicated that the phosphogypsum leachates contained much higher amounts of Cr, Cd, Zn, Cu, Fe, and Al than the groundwater. The spatiotemporal variation of the conductivity and of the major-element concentrations is linked to the phosphogypsum leachate infiltration as well as to a wide range of factors such as the natural recharge conditions and the water residence time. The 18O and 2H contents showed that the water of the Sfax-Agareb aquifer undergoes a large-scale evaporation process and originates from recent rainfall.



2018 ◽  
Vol 175 ◽  
pp. 03001
Author(s):  
Han Yang ◽  
Chen Kerui ◽  
Li Yang ◽  
Qu Bao

In the twenty-first century, China has vigorously promoted the research and construction of AC and DC transmission technology in order to ensure the optimal allocation of energy resources on a large scale [1]. In the construction of AC UHV transmission lines, the welding quality between the tower and the stiffening plate, as the load-bearing and tension-carrying welded structure, plays an important role in the overall quality of the steel structure. In the past, the semi-automatic CO2 welding process with solid-core wire often produced weld spatter that was not easy to clean up and had low welding efficiency. Semi-automatic CO2 flux-cored arc welding (FCAW) adapts to a wide range of currents and voltages and offers a high melting speed, which is significant for improving the process. This paper describes the technology in practical engineering applications and develops a basic training strategy for grid steel-structure welding technicians. It also presents a typical V-groove plate butt FCAW welding project, in the hope that this welding process will continue to spread.



Author(s):  
С.И. Носков ◽  
М.П. Базилевский ◽  
Ю.А. Трофимов ◽  
А. Буяннэмэх

The article discusses the problem of developing (forming) an efficiency function (an aggregated criterion, a convolution of criteria) for the sections of the Ulan Bator Railway (UBZhD), which would contain specially weighted partial indicators of the quality of functioning of these sections. The problem is solved on the basis of the information and computational technology (ICT) for multi-criteria assessment of the effectiveness of complex socio-economic and technical systems, developed at the Irkutsk State Transport University. ICT makes it possible, at the model level, to evaluate this efficiency as a single number (expressed, for example, as a percentage), which opens up ample opportunities for managing these systems, since it allows, in particular, a large-scale multifactorial comparative analysis of the activities of homogeneous organizational and other structures, and decisions of the most varied nature can be made on this basis. An efficiency function for the UBZhD sections has been built that includes weighted partial indicators of this efficiency: loading, static load, unloading, dispatch of cars, transportation of passengers, idle time of cars with one processing, idle time of local cars, idle time of transit cars with processing, and idle time of transit cars without processing. Based on this function, an efficiency scaled to 100% is calculated for each section. All preference indicators are ordered in descending order of importance. Such information, generated annually, can be very useful to UBZhD management for making a wide range of managerial decisions, including personnel decisions. Similar work can be performed in the interests of RAO Russian Railways.
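The weighted convolution of partial indicators described above can be sketched as follows. This is a minimal illustration, not the paper's actual model: the linear form, the assumption that indicators are pre-normalised to [0, 1], and the example indicator names are all assumptions made here for clarity.

```python
def efficiency_score(indicators, weights):
    """Aggregate weighted partial indicators into a single efficiency
    value scaled to 100%. Indicator values are assumed already
    normalised to [0, 1]; weights are non-negative and sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return 100.0 * sum(weights[k] * indicators[k] for k in weights)

# Hypothetical section with two of the paper's indicator types:
score = efficiency_score(
    {"loading": 0.8, "unloading": 0.5},   # normalised partial indicators
    {"loading": 0.6, "unloading": 0.4},   # weights in descending importance
)
```

With these illustrative numbers the section scores 68.0 out of 100, and homogeneous sections become directly comparable on one scale.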



Author(s):  
Osvaldo Adilson De Carvalho Junior ◽  
Sarita Mazzini Bruschi ◽  
Regina Helena Carlucci Santana ◽  
Marcos José Santana

The aim of this paper is to propose and evaluate GreenMACC (Green Metascheduler Architecture to Provide QoS in Cloud Computing), an extension of the MACC architecture (Metascheduler Architecture to provide QoS in Cloud Computing) which uses green IT techniques to provide Quality of Service. The paper evaluates the performance of the policies at the four stages of scheduling, focusing on energy consumption and average response time. The results presented confirm the consistency of the proposal, as it controls both energy consumption and the quality of services requested by different users of a large-scale private cloud.



Author(s):  
Donovan H Parks ◽  
Michael Imelfort ◽  
Connor T Skennerton ◽  
Philip Hugenholtz ◽  
Gene W Tyson

Large-scale recovery of genomes from isolates, single cells, and metagenomic data has been made possible by advances in computational methods and substantial reductions in sequencing costs. While this increasing breadth of draft genomes is providing key information regarding the evolutionary and functional diversity of microbial life, it has become impractical to finish all available reference genomes. Making robust biological inferences from draft genomes requires accurate estimates of their completeness and contamination. Current methods for assessing genome quality are ad hoc and generally make use of a limited number of ‘marker’ genes conserved across all bacterial or archaeal genomes. Here we introduce CheckM, an automated method for assessing the quality of a genome using a broader set of marker genes specific to the position of a genome within a reference genome tree and information about the collocation of these genes. We demonstrate the effectiveness of CheckM using synthetic data and a wide range of isolate, single-cell and metagenome-derived genomes. CheckM is shown to provide accurate estimates of genome completeness and contamination, and to outperform existing approaches. Using CheckM, we identify a diverse range of errors currently impacting publicly available isolate genomes and demonstrate that genomes obtained from single cells and metagenomic data vary substantially in quality. In order to facilitate the use of draft genomes, we propose an objective measure of genome quality that can be used to select genomes suitable for specific gene- and genome-centric analyses of microbial communities.
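The core idea of marker-based quality estimation can be illustrated with a heavily simplified single-copy-marker sketch. This is not CheckM's actual algorithm, which uses lineage-specific, collocated marker sets placed on a reference genome tree; here every marker is treated as an independent single-copy gene purely for illustration.

```python
def completeness_contamination(observed, expected_markers):
    """Naive single-copy-marker estimate (illustrative only).

    observed: dict mapping marker gene -> copy count found in the bin.
    Completeness: fraction of expected markers present at least once.
    Contamination: extra copies beyond one, relative to the marker count."""
    n = len(expected_markers)
    present = sum(1 for m in expected_markers if observed.get(m, 0) >= 1)
    extra = sum(max(observed.get(m, 0) - 1, 0) for m in expected_markers)
    return 100.0 * present / n, 100.0 * extra / n

# A bin missing marker "c" and carrying a duplicate of "b":
comp, cont = completeness_contamination(
    {"a": 1, "b": 2, "d": 1}, ["a", "b", "c", "d"]
)
```

In this toy case the bin is 75% complete and 25% contaminated; CheckM's collocation-aware, lineage-specific sets make both estimates far more robust than this per-gene tally.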



2013 ◽  
Vol 2013 ◽  
pp. 1-10 ◽  
Author(s):  
Xiaoying Wang ◽  
Xiaojing Liu ◽  
Lihua Fan ◽  
Xuhan Jia

As cloud computing offers services to many users worldwide, pervasive applications from customers are hosted by large-scale data centers. On such platforms, virtualization technology is employed to multiplex the underlying physical resources. Since the incoming loads of different applications vary significantly, it is critical to manage the placement and resource allocation of the virtual machines (VMs) in order to guarantee the quality of services. In this paper, we propose a decentralized virtual machine migration approach inside data centers for cloud computing environments. The system models and power models are defined and described first. Then, we present the key steps of the decentralized mechanism, including the establishment of load vectors, load information collection, VM selection, and destination determination. A two-threshold decentralized migration algorithm is implemented to further reduce energy consumption while maintaining the quality of services. We examine the effect of our approach through performance evaluation experiments, analyzing and discussing the thresholds and other factors. The results illustrate that the proposed approach can efficiently balance the loads across different physical nodes and also reduces the power consumption of the entire system.
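The two-threshold decision at the heart of such an algorithm can be sketched as below. The threshold values, the action names, and the scalar load measure are assumptions for illustration; the paper's mechanism additionally uses load vectors and a destination-selection step.

```python
def migration_decision(load, low, high):
    """Two-threshold policy for one physical node.
    Below `low`: the node is underutilized, so its VMs are migration
    candidates and the node can be consolidated (powered down) to save
    energy. Above `high`: the node is overloaded, so a VM should be
    migrated off to protect quality of service. Otherwise: no action."""
    if load < low:
        return "consolidate"
    if load > high:
        return "offload"
    return "stay"
```

For example, with thresholds `low=0.3` and `high=0.8`, a node at 20% utilization is a consolidation candidate, one at 90% must shed a VM, and one at 50% is left alone; tuning the gap between the two thresholds trades migration churn against energy savings.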



2021 ◽  
Vol 19 (1) ◽  
Author(s):  
Paul Wilson ◽  
Jenny Billings ◽  
Julie MacInnes ◽  
Rasa Mikelyte ◽  
Elizabeth Welch ◽  
...  

Abstract Background With innovation in service delivery increasingly viewed as crucial to the long-term sustainability of health systems, NHS England launched an ambitious new model of care (Vanguard) programme in 2015. Supported by a £350 million transformation fund, 50 Vanguard sites were to act as pilots for innovation in service delivery, to move quickly to change the way that services were delivered, breaking down barriers between sectors and improving the coordination and delivery of care. Methods As part of a national evaluation of the Vanguard programme, we conducted an evidence synthesis to assess the nature and quality of locally commissioned evaluations. With access to a secure, online hub used by the Vanguard and other integrated care initiatives, two researchers retrieved any documents from a locally commissioned evaluation for inclusion. All identified documents were downloaded and logged, and details of the evaluators, questions, methodological approaches and limitations in design and/or reporting were extracted. As included evaluations varied in nature and type, a narrative synthesis was undertaken. Results We identified a total of 115 separate reports relating to the locally commissioned evaluations. Five prominent issues relating to evaluation conduct were identified across included reports: use of logic models, number and type of evaluation questions posed, data sharing and information governance, methodological challenges and evaluation reporting in general. A combination of resource, data and time constraints means that evaluations often attempted to but did not fully address the wide range of questions posed by individual Vanguards. Conclusions Significant investment was made in independent local evaluations of the Vanguard programme by NHS England. 
This synthesis represents the only comprehensive attempt to capture methodological learning and may serve as a key resource for researchers and policy-makers seeking to understand and investigate large-scale system change, both within the NHS and internationally. PROSPERO (Registration number: CRD42017069282).



2015 ◽  
Vol 5 (3) ◽  
pp. 795-800 ◽  
Author(s):  
S. F. Issawi ◽  
A. Al Halees ◽  
M. Radi

Cloud computing is a recent, emerging technology in the IT industry. It is an evolution of previous models such as grid computing. It enables a wide range of users to access a large shared pool of resources over the internet. In such a complex system, there is a tremendous need for an efficient load-balancing scheme in order to satisfy peak user demands and provide a high quality of services. One of the challenging problems that degrade the performance of a load-balancing process is bursty workloads. Although much research has proposed different load-balancing algorithms, most of it neglects the problem of bursty workloads. Motivated by this problem, this paper proposes a new burstiness-aware load-balancing algorithm which can adapt to variation in the request rate by adopting two load-balancing algorithms: Round Robin (RR) in the burst state and Random in the non-burst state. Fuzzy logic is used in order to assign the received request to a balanced VM. The algorithm has been evaluated and compared with other algorithms using the Cloud Analyst simulator. Results show that the proposed algorithm improves the average response time and average processing time in comparison with other algorithms.
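The RR-in-burst / Random-in-non-burst switch can be sketched as below. The class name, the scalar request-rate detector, and the fixed burst threshold are assumptions made here; the paper's actual burst detection and the fuzzy-logic VM-selection step are not reproduced.

```python
import random

class BurstAwareBalancer:
    """Switch between Round Robin (burst state) and Random (non-burst
    state) based on the observed request rate. Illustrative sketch of
    the two-mode idea only."""

    def __init__(self, n_vms, burst_rate):
        self.n = n_vms
        self.burst_rate = burst_rate  # requests/sec treated as bursty
        self.rr = 0                   # Round Robin cursor

    def select_vm(self, current_rate):
        if current_rate >= self.burst_rate:
            # Burst state: deterministic Round Robin spreads the surge evenly.
            vm = self.rr
            self.rr = (self.rr + 1) % self.n
            return vm
        # Non-burst state: Random selection keeps overhead minimal.
        return random.randrange(self.n)
```

Under a burst (`current_rate` above the threshold) the balancer cycles 0, 1, 2, 0, ... over three VMs; under light load it picks any VM uniformly at random.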



Author(s):  
Putri Amelia ◽  
Artya Lathifah ◽  
Muhammad Dliya'ul Haq ◽  
Christoph Lorenz Reimann ◽  
Yudi Setiawan

Background: To remain relevant in the customer-oriented market, hospitals must pay attention to the quality of services and meet customers' expectations from the admission to the discharge stage. For an outpatient customer, the pharmacy is the last unit visited before discharge, so it is likely to influence patient satisfaction and reflect the quality of the hospital's service. However, at certain hospitals the waiting time is long, and resources need to be deployed strategically to reduce queue time. Objective: This research aims to arrange the number of staff (pharmacists and workers) at each station in the pharmacy outpatient service to minimise the queue time. Methods: A discrete-event simulation method is used to observe the waiting time spent at the pharmacy. The simulation run is valid and effective for testing the scenario. Results: It is recommended to add more personnel for the non-compounding medicine and packaging stations to reduce the waiting time by 22.41%. Conclusion: By adding personnel to the non-compounding and packaging stations, the system performance could be improved. A cost-effectiveness analysis should be done to corroborate the finding. Keywords: Discrete Event Simulation, Hospital, Outpatient Service, Pharmacy Unit, System Analysis
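The effect of staffing a station can be illustrated with a minimal discrete-event sketch of one FIFO station with parallel servers. This is not the paper's model: the fixed service time, the function name, and the single-station scope are simplifying assumptions made here.

```python
import heapq

def simulate_station(arrivals, service_time, n_staff):
    """Average waiting time at one pharmacy station with n_staff
    parallel servers and FIFO discipline. `arrivals` are arrival
    times; service time is a fixed constant for simplicity."""
    free_at = [0.0] * n_staff          # when each staff member is next free
    heapq.heapify(free_at)
    total_wait = 0.0
    for t in sorted(arrivals):
        start = max(t, heapq.heappop(free_at))  # earliest-free server
        total_wait += start - t
        heapq.heappush(free_at, start + service_time)
    return total_wait / len(arrivals)
```

With three simultaneous arrivals and a 5-minute service time, one staff member yields a 5-minute average wait, while three staff eliminate queueing entirely; sweeping `n_staff` per station is the kind of scenario test the paper's simulation performs.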


