Performance Modelling
Recently Published Documents


TOTAL DOCUMENTS

683
(FIVE YEARS 128)

H-INDEX

25
(FIVE YEARS 5)

2022 ◽  
Author(s):  
Sunita Saini ◽  
Davinder Singh Saini

Abstract Fundamental charge vector analysis is a single-parameter optimization technique limited to conduction loss, assuming all frequency-dependent switching (parasitic) losses are negligible. This paper investigates a generalized framework for designing DC-DC switched-capacitor (SC) converters that accounts for both conduction and switching loss. A new technique is proposed to find the optimum switching frequency and switch size for a target load current and output voltage that maximize efficiency. The analysis identifies the switching frequency and switch size for a two-phase 2:1 series-parallel SC converter with a target load current of 2.67 mA implemented on a 22 nm technology node. Results show that a switching frequency of at least 250 MHz is required to achieve an efficiency above 90% and an output voltage greater than 0.85 V when the switch size of a unit cell corresponds to a 10 Ω on-resistance. MATLAB and PSpice simulation tools are used to obtain and validate the results.
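
As a minimal illustration of the kind of two-variable optimization described above (not the paper's actual method), the sketch below sweeps switching frequency and switch size for a 2:1 series-parallel SC converter using the standard SSL/FSL output-impedance approximation; the input voltage, flying capacitance and per-cell parasitic capacitance are assumed values.

    # Minimal sketch (assumed component values, not the paper's model):
    # brute-force search for the switching frequency and switch size that
    # maximize the efficiency of a 2:1 series-parallel SC converter.
    import numpy as np

    VIN, I_LOAD = 1.8, 2.67e-3       # assumed input voltage (V), target load current (A)
    C_FLY = 100e-12                  # assumed flying capacitance (F)
    R_UNIT, C_UNIT = 10.0, 1e-15     # unit-cell on-resistance (ohm) and parasitic capacitance (F)

    def efficiency(f_sw, width):
        """Efficiency with conduction (output-impedance) and switching (parasitic) loss."""
        r_ssl = 1.0 / (4.0 * f_sw * C_FLY)       # slow-switching-limit impedance
        r_fsl = 2.0 * R_UNIT / width             # fast-switching-limit impedance
        r_out = np.hypot(r_ssl, r_fsl)           # combined output impedance
        v_out = VIN / 2.0 - I_LOAD * r_out
        p_out = v_out * I_LOAD
        p_cond = I_LOAD ** 2 * r_out             # conduction loss
        p_sw = f_sw * C_UNIT * width * VIN ** 2  # frequency-dependent parasitic loss
        return p_out / (p_out + p_cond + p_sw), v_out

    freqs = np.logspace(7, 9.5, 200)             # 10 MHz .. ~3 GHz
    widths = range(1, 65)                        # switch size in unit cells
    eta, f_opt, w_opt = max((efficiency(f, w)[0], f, w) for f in freqs for w in widths)
    print(f"best efficiency {eta:.3f} at {f_opt / 1e6:.0f} MHz, {w_opt} unit cells, "
          f"Vout = {efficiency(f_opt, w_opt)[1]:.3f} V")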


2021 ◽  
Author(s):  
◽  
Hassan Tariq

There is a huge and rapidly increasing amount of data being generated by social media, mobile applications and sensing devices. Big data is the term usually used to describe such data, and it is characterised in terms of the 3Vs: volume, variety and velocity. To process and mine such massive amounts of data, several approaches and platforms have been developed, such as Hadoop, a popular open-source distributed and parallel computing framework. Hadoop has a large number of configurable parameters which can be set before jobs execute to optimize the resource utilization and execution time of a cluster, and these parameters have a significant impact on system resources and execution time. Tuning such a large number of parameters to optimize the performance of a Hadoop cluster is a tedious task. Most current big data modelling approaches do not capture the complex interactions between configuration parameters and changes in the cluster environment, such as the use of different datasets or query types. This makes it difficult to predict, for example, the execution time of a job or the resource utilization of a cluster. Other relevant attributes include the configuration parameters, the structure of the query, the dataset, the number of nodes and the infrastructure used.

Our first main objective was to design reliable experiments to understand the relationships between these attributes. Before designing and implementing the actual experiments, we applied Hazard and Operability (HAZOP) analysis to identify operational hazards that can affect the normal working of a cluster and the execution of Hadoop jobs. This brainstorming activity improved the design and implementation of our experiments by improving their internal validity, and it helped us identify the considerations that must be taken into account to obtain reliable results. After implementing our design, we characterised the relationships between different Hadoop configuration parameters and network and system performance measures.

Our second main objective was to investigate the use of machine learning to model and predict the resource utilization and execution time of Hadoop jobs. Resource utilization and execution time are affected by attributes such as the configuration parameters and the structure of the query. To estimate or predict, qualitatively or quantitatively, the level of resource utilization and execution time, it is important to understand the impact of different combinations of these Hadoop job attributes. Experiments with many different combinations of attributes could uncover this, but it is very difficult to run such a large number of jobs and then interpret the data manually, extract patterns from it, and produce a model that generalizes to unseen scenarios. To automate data extraction and model the complex behaviour of the different attributes of a Hadoop job, we used machine learning. Our decision-tree-based approach enabled us to systematically discover significant patterns in the data. Our results showed that the decision tree models constructed for the different resources and for execution time were informative and robust, and that they generalized over a wide range of minor and major environmental changes, such as changes in dataset, cluster size and infrastructure (for example, Amazon EC2).

Moreover, the use of different correlation, regression and clustering techniques, such as M5P, Pearson's correlation and k-means clustering, confirmed our findings and provided further insight into the relationships among the different attributes. M5P is a classification and regression technique that predicted the functional relationships among the different job attributes, and k-means clustering allowed us to identify experimental runs with similar resource utilization and execution time. Statistical significance tests, used to validate the significance of the changes in results across different experimental runs, also showed the effectiveness of our resource and performance modelling and prediction method.
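
A minimal sketch of the decision-tree modelling step described above is shown below; it is not the thesis code, and the attribute names and synthetic training data are illustrative assumptions. It trains a scikit-learn regression tree to predict job execution time from configuration and environment attributes.

    # Minimal sketch (illustrative attributes and synthetic data, not the thesis pipeline):
    # fit a decision tree that predicts Hadoop job execution time from job attributes.
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(0)
    n = 500
    runs = pd.DataFrame({
        "map_tasks":     rng.integers(2, 64, n),    # e.g. mapreduce.job.maps
        "reduce_tasks":  rng.integers(1, 32, n),    # e.g. mapreduce.job.reduces
        "io_sort_mb":    rng.integers(64, 512, n),  # e.g. mapreduce.task.io.sort.mb
        "dataset_gb":    rng.choice([1, 10, 50, 100], n),
        "cluster_nodes": rng.choice([4, 8, 16], n),
    })
    # Toy target: execution time grows with data size and shrinks with parallelism.
    runs["exec_time_s"] = (runs.dataset_gb * 60 / runs.cluster_nodes
                           + 2000 / runs.map_tasks + rng.normal(0, 10, n))

    X_train, X_test, y_train, y_test = train_test_split(
        runs.drop(columns="exec_time_s"), runs["exec_time_s"], random_state=0)
    model = DecisionTreeRegressor(max_depth=5, min_samples_leaf=10, random_state=0)
    model.fit(X_train, y_train)
    print("mean absolute error (s):",
          round(mean_absolute_error(y_test, model.predict(X_test)), 1))

The same table of experimental runs could equally be fed to an M5P-style model tree or to k-means to group runs with similar resource profiles.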


2021 ◽  
Author(s):  
Sheriffo Ceesay ◽  
Yuhui Lin ◽  
Adam Barker

2021 ◽  
pp. 131343
Author(s):  
Martin Jendrlin ◽  
Aleksandar Radu ◽  
Vladimir Zholobenko ◽  
Dmitry Kirsanov

2021 ◽  
Author(s):  
◽  
Jordan Ansell

Analytical modelling and experimental measurement can both be used to evaluate the performance of a network: models provide insight, while measurement provides realism. For software defined networks (SDN), it is not known how well the existing queueing models represent the performance of a real SDN deployment, which leads to a gap between what can be predicted and the actual behaviour of a software defined network. This work investigates the accuracy of software defined network queueing models by comparing the performance predictions of analytical models with experimental performance results. The outcome is an understanding of how reliable the existing queueing models are and of the areas where they can be improved.
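
The abstract does not name a specific queueing model; as one common baseline, the sketch below predicts mean packet delay for an SDN switch with an M/M/1 approximation, where table-miss packets incur an additional M/M/1 controller round trip. The arrival rate, service rates and miss probability are assumed values.

    # Minimal illustration (assumed rates, not a model from the thesis): M/M/1-based
    # prediction of mean packet delay for an SDN switch plus controller path.
    def mm1_delay(arrival_rate, service_rate):
        """Mean sojourn time W = 1 / (mu - lambda) for a stable M/M/1 queue."""
        if arrival_rate >= service_rate:
            raise ValueError("unstable queue: arrival rate must be below service rate")
        return 1.0 / (service_rate - arrival_rate)

    lam = 8000.0         # packet arrivals per second at the switch (assumed)
    mu_switch = 10000.0  # switch service rate, packets/s (assumed)
    mu_ctrl = 3000.0     # controller service rate, flow requests/s (assumed)
    p_miss = 0.05        # fraction of packets triggering a controller request (assumed)

    w_switch = mm1_delay(lam, mu_switch)
    w_ctrl = mm1_delay(p_miss * lam, mu_ctrl)
    w_total = w_switch + p_miss * w_ctrl
    print(f"predicted mean delay: {w_total * 1e3:.3f} ms "
          f"(switch {w_switch * 1e3:.3f} ms, controller path {w_ctrl * 1e3:.3f} ms)")

Comparing such predicted delays with delays measured on a real deployment is the kind of model-versus-measurement check the work describes.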


2021 ◽  
Vol 2090 (1) ◽  
pp. 012101
Author(s):  
D Alfonso-Corcuera ◽  
S. Pindado ◽  
M Ogueta-Gutiérrez ◽  
A Sanz-Andrés

Abstract In the present work, the effect of bearing friction forces on cup anemometer performance is studied. The study is based on the classical analytical approach to cup anemometer performance (the 2-cup model) used in the analyses by Schrenk (1929) and Wyngaard (1981). The dependence of the friction torque on temperature was modelled using exponential functions fitted to the experimental results from RISØ report #1348 by Pedersen (2003). Results indicate, as expected, poorer performance (a lower rotation speed at the same wind velocity) as friction increases. However, this performance decrease depends on the aerodynamic characteristics of the cups: more precisely, the effect of friction is modified by the ratio between the maximum aerodynamic drag coefficient (at 0° yaw angle) and the minimum one (at 180° yaw angle). This suggests a possible way to increase the efficiency of cup anemometer rotors. In addition, when the friction torque is included in the equations, a noticeable deviation of the rotation rate (0.5-1% with respect to the rotation rate expected without friction) is found at low temperatures.
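
As a minimal sketch of the 2-cup torque balance referred to above (illustrative geometry and drag coefficients, not the paper's exact formulation), the rotor's equilibrium rotation rate is the speed at which the driving-cup torque minus the retreating-cup torque equals the bearing friction torque:

    # Minimal 2-cup anemometer sketch (assumed geometry and drag coefficients):
    # solve the torque balance for the equilibrium rotation rate at a given friction torque.
    from scipy.optimize import brentq

    RHO, S, R = 1.225, 0.008, 0.06   # air density (kg/m^3), cup frontal area (m^2), arm length (m)
    CD_MAX, CD_MIN = 1.4, 0.4        # drag coefficients at 0 deg and 180 deg yaw (assumed)

    def net_torque(omega, wind_speed, friction_torque):
        """Driving-cup torque minus retreating-cup torque minus bearing friction."""
        q = 0.5 * RHO * S * R
        drive = q * CD_MAX * (wind_speed - omega * R) ** 2
        brake = q * CD_MIN * (wind_speed + omega * R) ** 2
        return drive - brake - friction_torque

    def rotation_rate(wind_speed, friction_torque=0.0):
        """Equilibrium rotation rate (rad/s), bracketed between standstill and the no-slip limit."""
        return brentq(net_torque, 0.0, wind_speed / R, args=(wind_speed, friction_torque))

    u = 8.0                          # wind speed, m/s
    for tf in (0.0, 1e-4, 5e-4):     # assumed friction torques, N*m
        print(f"friction {tf:.0e} N*m -> {rotation_rate(u, tf):.2f} rad/s")

Increasing the friction torque lowers the equilibrium rotation rate, and how strongly it does so depends on the CD_MAX/CD_MIN ratio, which is the effect the paper examines.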

