Disaster Recovery Services in Intercloud using Genetic Algorithm Load Balancer

Author(s):  
Tamanna Jena ◽  
J.R. Mohanty

The paradigm needs to shift from cloud computing to the intercloud for disaster recovery, since disasters can break out anytime and anywhere. Natural disaster response involves a radically high volume of impatient job requests demanding immediate attention. Under such disequilibrium, the intercloud is the more practical and functional option. Protocols such as quality of service, service level agreements, and disaster recovery pacts need to be discussed and clarified during initial setup to fast-track distress scenarios. Orchestrating resources in a large-scale distributed system under multi-objective optimization (minimum energy consumption, maximum throughput, load balancing, and minimum carbon footprint all at once) is quite challenging. The intercloud, where the resources of different clouds are aligned, plays a crucial role in resource mapping. The objective of this paper is to improve and fast-track the mapping procedures on the cloud platform and to address impatient job requests in a balanced and efficient manner. A genetic-algorithm-based resource allocation is proposed, using Pareto-optimal mapping of resources to maintain a high processor utilization rate, high throughput, and a low carbon footprint. Decision variables include processor utilization, throughput, locality cost, and real-time deadline. Simulation results of load balancers using first-in-first-out and the genetic algorithm are compared under similar circumstances.
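A minimal sketch of the kind of genetic-algorithm load balancer the abstract describes: a chromosome assigns each job to a processor, and fitness scalarizes the stated decision variables (processor utilization, locality cost, deadline). The job data, processor-to-site mapping, and weights are all illustrative assumptions, not the paper's actual model.

```python
import random

random.seed(42)

# Assumed workload: 20 jobs with a length, a deadline, and a data locality site.
JOBS = [{"len": random.randint(1, 10), "deadline": 30, "site": random.randrange(3)}
        for _ in range(20)]
N_PROC = 4
PROC_SITE = [0, 1, 2, 0]          # assumed processor-to-datacentre mapping

def fitness(chrom):
    load = [0] * N_PROC
    locality = 0
    for job, p in zip(JOBS, chrom):
        load[p] += job["len"]
        locality += 0 if PROC_SITE[p] == job["site"] else 1
    makespan = max(load)                      # lower makespan -> higher throughput
    util = sum(load) / (N_PROC * makespan)    # mean utilisation in (0, 1]
    missed = sum(1 for job, p in zip(JOBS, chrom) if load[p] > job["deadline"])
    # Weighted scalarisation of the multi-objective goal (weights are assumptions).
    return 2.0 * util - 0.05 * locality - 0.5 * missed

def evolve(generations=60, pop_size=30):
    pop = [[random.randrange(N_PROC) for _ in JOBS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(JOBS))
            child = a[:cut] + b[cut:]                 # one-point crossover
            if random.random() < 0.2:                 # mutation: reassign one job
                child[random.randrange(len(JOBS))] = random.randrange(N_PROC)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

A true Pareto-optimal mapping would keep the objectives separate rather than collapsing them into one weighted score; this sketch only shows the chromosome encoding and evolution loop.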


Forests ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 26
Author(s):  
Yutu Yang ◽  
Zilong Zhuang ◽  
Yabin Yu

Defects on a solid wood board have a great influence on the aesthetics and mechanical properties of the board. After the defects are removed, the board is no longer of standard size; the manual line-drawing and cutting procedure is time-consuming and laborious, and does not necessarily yield an optimal solution. Intelligent cutting of the board can be realized using a genetic algorithm. However, the global optimum for the whole machining process cannot be obtained by considering the sawing and splicing of raw materials separately. Considering board cutting and board splicing together improves the utilization rate of the solid wood board. The effective utilization rate when raw-material sawing with standardized piece dimensions and board splicing are considered in isolation is 79.1%, while shortcut splicing optimization with non-standardized dimensions for the final board achieves a utilization rate of 88.6% (an improvement of 9.5 percentage points). In large-scale planning, shortcut splicing optimization also increased the utilization rate, by 12.14%. This has certain guiding significance for actual production.
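The cutting side of this problem can be framed as a one-dimensional cutting-stock search, which is a natural fit for a permutation-encoded genetic algorithm: a chromosome is a cutting order, and pieces are packed first-fit onto fixed-length stock boards. The piece sizes, board length, and mutation-only evolution below are illustrative assumptions, not the paper's actual formulation.

```python
import random

random.seed(7)
BOARD_LEN = 100
PIECES = [random.randint(10, 45) for _ in range(30)]   # required piece lengths

def boards_used(order):
    """Pack pieces in the given order, first-fit; return number of boards."""
    boards = []
    for i in order:
        length = PIECES[i]
        for b in range(len(boards)):
            if boards[b] + length <= BOARD_LEN:
                boards[b] += length
                break
        else:
            boards.append(length)          # open a new board
    return len(boards)

def utilisation(order):
    return sum(PIECES) / (boards_used(order) * BOARD_LEN)

def evolve(pop_size=20, generations=80):
    pop = [random.sample(range(len(PIECES)), len(PIECES)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=boards_used)
        survivors = pop[: pop_size // 2]   # keep the half using fewest boards
        pop = survivors[:]
        while len(pop) < pop_size:
            child = random.choice(survivors)[:]
            i, j = random.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]   # swap mutation keeps a valid permutation
            pop.append(child)
    return min(pop, key=boards_used)

best = evolve()
```

The paper's point is that optimizing cutting and splicing jointly beats solving each stage alone; extending this sketch would mean adding splicing decisions into the same chromosome.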


2021 ◽  
pp. 074391562110088
Author(s):  
Luca Panzone ◽  
Alistair Ulph ◽  
Denis Hilton ◽  
Ilse Gortemaker ◽  
Ibrahim Tajudeen

The increase in global temperatures requires substantial reductions in the greenhouse gas emissions arising from consumer choices. We use an experimental incentive-compatible online supermarket to analyse the effect of a carbon-based choice architecture, which presents commodities to customers in high, medium and low carbon footprint groups, in reducing the carbon footprints of grocery baskets. We relate this choice architecture to two other policy interventions: a bonus-malus carbon tax on all grocery products; and moral goal priming, using an online banner noting the moral importance of reducing one’s carbon footprint. Participants shopped from their home in an online store containing 612 existing food products and 39 existing non-food products for which we had data on carbon footprint, over three successive weeks, with the interventions occurring in the second and third weeks. Choice architecture reduced carbon footprint significantly in the third week by reducing the proportion of choices made in the high-carbon aisle. The carbon tax reduced carbon footprint in both weeks, primarily by reducing overall spend. The goal priming banner led to a small reduction in carbon footprint in the second week only. Thus, the design of the marketplace plays an important role in achieving the policy objective of reducing greenhouse gas emissions.


Author(s):  
Min Shang ◽  
Ji Luo

The expansion of Xi’an City has increased the consumption of energy and land resources, leading to serious environmental pollution problems. This study was therefore carried out to measure the carbon carrying capacity, net carbon footprint, and net carbon footprint pressure index of Xi’an City, and to characterize the carbon sequestration capacity of the Xi’an ecosystem, laying a foundation for comprehensive and reasonable low-carbon development measures. The study aims to provide a reference for China to develop a low-carbon economy through the Tapio decoupling principle. The decoupling relationship between CO2 and its driving factors was explored through the Tapio decoupling model. Time-series data were used to calculate the carbon footprint. An auto-encoder from deep learning was combined with a parallel algorithm from cloud computing: a general multilayer perceptron neural network, realized by a parallel BP learning algorithm, was proposed based on MapReduce on a cloud computing cluster. A partial least squares (PLS) regression model was constructed to analyze the driving factors. The results show that, in terms of city size, the variable importance in projection (VIP) output of the urbanization rate has a strong inhibitory effect on carbon footprint growth, while the VIP value of the permanent population ranks last; in terms of economic development, the impacts of fixed asset investment and the added value of the secondary industry on the carbon footprint rank third and fourth. The marginal effect of the carbon footprint thus exceeds that of economic growth after economic growth reaches a certain stage, revealing the driving forces and mechanisms that can promote the growth of urban space.
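The Tapio decoupling model mentioned above reduces to an elasticity: the percentage change in CO2 emissions divided by the percentage change in economic output, classified against the usual Tapio thresholds (0.8 and 1.2). The sketch below uses invented sample figures, not the study's Xi’an data.

```python
# Tapio decoupling elasticity: e = (%ΔCO2) / (%ΔGDP).
def tapio_elasticity(co2_0, co2_1, gdp_0, gdp_1):
    return ((co2_1 - co2_0) / co2_0) / ((gdp_1 - gdp_0) / gdp_0)

def classify(e, d_co2, d_gdp):
    """Common Tapio states for the expansionary cases (others collapsed)."""
    if d_gdp > 0 and d_co2 > 0:
        if e < 0.8:
            return "weak decoupling"          # economy grows faster than emissions
        if e <= 1.2:
            return "expansive coupling"
        return "expansive negative decoupling"
    if d_gdp > 0 and d_co2 <= 0:
        return "strong decoupling"            # economy grows, emissions fall
    return "other (recessive cases)"

# Invented example: CO2 rises 4% while GDP rises 10% -> e = 0.4, weak decoupling.
e = tapio_elasticity(100.0, 104.0, 1000.0, 1100.0)
state = classify(e, d_co2=4.0, d_gdp=100.0)
```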


Diversity ◽  
2019 ◽  
Vol 11 (7) ◽  
pp. 109 ◽  
Author(s):  
Rebecca T. Kimball ◽  
Carl H. Oliveros ◽  
Ning Wang ◽  
Noor D. White ◽  
F. Keith Barker ◽  
...  

It has long been appreciated that analyses of genomic data (e.g., whole genome sequencing or sequence capture) have the potential to reveal the tree of life, but it remains challenging to move from sequence data to a clear understanding of evolutionary history, in part due to the computational challenges of phylogenetic estimation using genome-scale data. Supertree methods solve that challenge because they facilitate a divide-and-conquer approach for large-scale phylogeny inference by integrating smaller subtrees in a computationally efficient manner. Here, we combined information from sequence capture and whole-genome phylogenies using supertree methods. However, the available phylogenomic trees had limited overlap so we used taxon-rich (but not phylogenomic) megaphylogenies to weave them together. This allowed us to construct a phylogenomic supertree, with support values, that included 707 bird species (~7% of avian species diversity). We estimated branch lengths using mitochondrial sequence data and we used these branch lengths to estimate divergence times. Our time-calibrated supertree supports radiation of all three major avian clades (Palaeognathae, Galloanseres, and Neoaves) near the Cretaceous-Paleogene (K-Pg) boundary. The approach we used will permit the continued addition of taxa to this supertree as new phylogenomic data are published, and it could be applied to other taxa as well.


2019 ◽  
Vol 11 (9) ◽  
pp. 2571
Author(s):  
Xujing Zhang ◽  
Lichuan Wang ◽  
Yan Chen

Low-carbon production has become one of the top management objectives in every industry. In garment manufacturing, the material distribution process always generates high carbon emissions. In order to reduce carbon emissions and the number of operators, meeting enterprises’ requirements to control production cost and protect the environment, the paths of material distribution were analyzed to find the optimal solution. In this paper, a model of material distribution minimizing both carbon emissions and the number of vehicles (operators) was established to optimize multi-target management in three different production-line layouts (multi-line, U-shape two-line, and U-shape three-line), while the workstations were organized in three ways: in the order of processes, by the type of machines, and by the components of the garment. The NSGA-II algorithm (non-dominated sorting genetic algorithm II) was applied to solve this model. The feasibility of the model and algorithm was verified in the manufacture of men’s shirts. Material distribution in the multi-line layout produced the least carbon emissions when the machines were grouped by type.
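The step that distinguishes NSGA-II from a single-objective genetic algorithm is non-dominated sorting: candidate solutions are ranked into Pareto fronts over the objectives, here the two minimization targets named in the abstract (carbon emissions, number of vehicles). The candidate points below are invented for illustration; a full NSGA-II would add crowding distance and the usual crossover/mutation loop.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Peel off successive Pareto fronts (minimization in all objectives)."""
    fronts, remaining = [], list(points)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Invented (emissions, vehicles) candidates for a material-distribution plan.
solutions = [(120, 4), (100, 5), (150, 3), (110, 4), (100, 4)]
fronts = non_dominated_sort(solutions)
# fronts[0] is the Pareto front: no candidate improves one objective
# without worsening the other.
```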


2020 ◽  
pp. 136943322094719
Author(s):  
Xianrong Qin ◽  
Pengming Zhan ◽  
Chuanqiang Yu ◽  
Qing Zhang ◽  
Yuantao Sun

Optimal sensor placement is an important component of a reliable structural health monitoring system for a large-scale complex structure. However, current research mainly focuses on the sensor placement problem for structures without any initial sensor layout. In some cases, experienced engineers first determine key positions of the whole structure where sensors must be placed, that is, an initial sensor layout. Moreover, current genetic algorithms and partheno-genetic algorithms change the positions of the initial sensors during iteration, so they are unsuitable for sensor placement optimization based on an initial layout. In this article, an optimal sensor placement method based on an initial sensor layout, using an improved partheno-genetic algorithm, is proposed. First, improved genetic operations of the partheno-genetic algorithm for sensor placement optimization with an initial sensor layout are presented, such as segmented swap, reverse, and insert operators that avoid changing the initial sensor locations. Then, the objective function for the optimal sensor placement problem is formulated based on the modal assurance criterion, a modal energy criterion, and sensor placement cost. Finally, the effectiveness and reliability of the proposed method are validated on a numerical example of a quayside container crane. The sensor placement obtained with the proposed method is better than those obtained with the effective independence method without an initial layout and with the traditional partheno-genetic algorithm.
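The core idea, operators that never disturb the engineer-chosen sensors, can be sketched by restricting mutation to the free positions of a binary placement chromosome. The encoding, the candidate-location count, and the fixed index set below are assumptions for illustration; the paper's actual segmented operators also include reverse and insert variants.

```python
import random

random.seed(1)
N_LOCATIONS = 12
FIXED = {0, 5, 9}          # initial layout chosen by engineers (assumed indices)

def random_layout(n_sensors=6):
    """Binary chromosome: 1 = sensor placed. Fixed positions are always 1."""
    layout = [1 if i in FIXED else 0 for i in range(N_LOCATIONS)]
    free = [i for i in range(N_LOCATIONS) if i not in FIXED]
    for i in random.sample(free, n_sensors - len(FIXED)):
        layout[i] = 1
    return layout

def segmented_swap(layout):
    """Swap the values at two free positions; fixed sensors are untouched,
    and the total sensor count is preserved."""
    child = layout[:]
    free = [i for i in range(N_LOCATIONS) if i not in FIXED]
    i, j = random.sample(free, 2)
    child[i], child[j] = child[j], child[i]
    return child

parent = random_layout()
child = segmented_swap(parent)
```

Because the operator only ever touches indices outside FIXED, any number of generations leaves the initial layout intact, which is exactly the property the standard partheno-genetic operators lack.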

