A Hybrid Simulation Environment to Estimate the Accuracy, Energy Consumption and Processing Time for Image Processing Applications

Author(s):  
Christian Hartmann ◽  
Dietmar Fey
2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Hossein Ahmadvand ◽  
Fouzhan Foroutan ◽  
Mahmood Fathy

Abstract: Data variety is one of the most important features of Big Data. It results from aggregating data from multiple sources with an uneven distribution, and it causes high variation in the consumption of processing resources such as CPU time. This issue has been overlooked in previous work. To address it, the present work uses Dynamic Voltage and Frequency Scaling (DVFS) to reduce the energy consumption of computation, considering two types of deadlines as constraints. Before applying the DVFS technique to the compute nodes, we estimate the processing time and the frequency needed to meet the deadline. In the evaluation phase, we used a set of datasets and applications. The experimental results show that the proposed approach surpasses the other scenarios in processing real datasets, and that DV-DVFS can achieve up to 15% improvement in energy consumption.
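
As an illustration of the deadline-driven frequency selection described above, the following minimal sketch picks the lowest operating point whose estimated runtime still meets the deadline. It assumes a simple model in which processing time scales inversely with clock frequency; the function names and the frequency table are illustrative, not taken from the paper.

```python
# Hypothetical DVFS operating points (GHz), not from the paper.
AVAILABLE_FREQS_GHZ = [1.2, 1.6, 2.0, 2.4]

def estimate_runtime(baseline_time_s: float, baseline_freq: float, target_freq: float) -> float:
    """Estimate processing time at target_freq from a measurement at baseline_freq."""
    return baseline_time_s * (baseline_freq / target_freq)

def pick_frequency(baseline_time_s: float, baseline_freq: float, deadline_s: float) -> float:
    """Return the lowest available frequency whose estimated runtime meets the deadline."""
    for f in sorted(AVAILABLE_FREQS_GHZ):
        if estimate_runtime(baseline_time_s, baseline_freq, f) <= deadline_s:
            return f
    return max(AVAILABLE_FREQS_GHZ)  # deadline infeasible: run as fast as possible

# Example: a job measured at 30 s on 2.4 GHz with a 50 s deadline -> 1.6 GHz
print(pick_frequency(30.0, 2.4, 50.0))
```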


2011 ◽  
Vol 314-316 ◽  
pp. 2071-2075
Author(s):  
Jia Hai Wang ◽  
Wen Tao Gong

Discrete machine manufacturing enterprises have to introduce a new low-carbon manufacturing model in order to resolve the dilemma of mutual restraint between development and electrical energy consumption. This paper presents an approach to solving the job shop scheduling problem (JSP) with the objective of minimizing energy consumption by shortening the distance between the electricity peak and valley, according to the theory of load shifting. A mathematical model is proposed for the JSP with the objectives of minimizing the energy consumption and the processing time of the entire batch; the idea of time division is then introduced, and a solving method based on a GA built into eM-Plant is employed to verify the model and obtain satisfactory scheduling results.
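
The load-shifting idea can be illustrated with a tariff-weighted fitness function of the kind a GA might evaluate. The peak/valley hours, prices, schedule layout, and weights below are assumptions for illustration only, not the paper's eM-Plant/GA model.

```python
import math

PEAK_HOURS = set(range(8, 22))        # hypothetical peak window: 08:00-22:00
PEAK_PRICE, VALLEY_PRICE = 1.0, 0.4   # hypothetical relative tariffs

def operation_energy_cost(start_h: float, duration_h: float, power_kw: float) -> float:
    """Accumulate tariff-weighted energy of one operation, hour slice by hour slice."""
    cost, t, end = 0.0, start_h, start_h + duration_h
    while t < end:
        step = min(math.floor(t) + 1 - t, end - t)   # advance to the next hour boundary
        price = PEAK_PRICE if int(t) % 24 in PEAK_HOURS else VALLEY_PRICE
        cost += power_kw * step * price
        t += step
    return cost

def fitness(schedule, w_energy=0.7, w_time=0.3):
    """schedule: list of (start_h, duration_h, power_kw) tuples for all operations.
    Lower is better: energy cost favours valley hours, makespan keeps the batch short."""
    energy = sum(operation_energy_cost(s, d, p) for s, d, p in schedule)
    makespan = max(s + d for s, d, _ in schedule)
    return w_energy * energy + w_time * makespan
```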


Author(s):  
Xiao Wu ◽  
Peng Guo ◽  
Yi Wang ◽  
Yakun Wang

Abstract: In this paper, an identical parallel machine scheduling problem with step-deteriorating jobs is considered, with the objective of minimizing the weighted sum of tardiness cost and extra energy consumption cost. In particular, the actual processing time of a job is assumed to be a step function of its starting time and its deteriorating threshold. When the starting time of a job is later than its deteriorating threshold, the job faces two choices: (1) maintaining its status in holding equipment and being processed with a base processing time, or (2) consuming an extra penalty time to finish its processing. The two work patterns require different amounts of energy. To achieve energy-efficient scheduling, the selection of the pre-processing pattern must therefore be considered carefully. A mixed integer linear programming (MILP) model is proposed to minimize the total tardiness cost and the extra energy cost. Decomposition approaches based on logic-based Benders decomposition (LBBD) are developed by reformulating the studied problem into a master problem and several independent sub-problems. The master problem is relaxed to making only the assignment decisions; the sub-problems find optimal schedules within the job-to-machine assignments given by the master problem. Moreover, a MILP formulation and a Tabu search heuristic are used to solve the sub-problems. To evaluate the performance of our methods, three groups of test instances were generated, inspired by both real-world applications and benchmarks from the literature. The computational results demonstrate that the proposed decomposition approaches compute competitive schedules for medium- and large-size problems in terms of solution quality. In particular, the LBBD with Tabu search performs best among the four suggested methods.
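
The step-deteriorating rule and the two pre-processing patterns can be written down directly. The sketch below uses illustrative parameter names and placeholder energy terms rather than the paper's MILP notation; it only enumerates the alternatives, whereas the paper's model selects between them.

```python
def processing_options(start, base_time, threshold,
                       penalty_time, holding_power, machine_power):
    """Return the (processing_time, extra_energy) alternatives for one job.

    If the job starts by its deteriorating threshold there is no choice:
    base processing time, no extra energy. Otherwise it is either
    (1) kept fresh in holding equipment until it starts (extra holding
        energy, base processing time), or
    (2) allowed to deteriorate, needing an extra penalty time on the machine.
    """
    if start <= threshold:
        return [(base_time, 0.0)]
    hold = (base_time, holding_power * (start - threshold))                 # pattern 1
    deteriorate = (base_time + penalty_time, machine_power * penalty_time)  # pattern 2
    return [hold, deteriorate]
```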


Author(s):  
Binghai Zhou ◽  
Wenlong Liu

The increasing costs of energy and environmental pollution are prompting scholars to pay close attention to energy-efficient scheduling. This study constructs a multi-objective model for the hybrid flow shop scheduling problem with fuzzy processing times that minimizes the total weighted delivery penalty and the total energy consumption simultaneously. Setup times are sequence-dependent, and the in-stage parallel machines are unrelated, so the model closely reflects the actual energy consumption of the system. First, an energy-efficient bi-objective differential evolution algorithm is developed to solve this mixed integer programming model effectively. Then, a Nawaz-Enscore-Ham-based hybrid method is used to generate high-quality initial solutions. Neighborhoods are thoroughly exploited with a leader solution challenge mechanism, and global exploration is greatly improved with opposition-based learning and a chaotic search strategy. Finally, problems of various scales are used to evaluate the performance of this green scheduling algorithm. Computational experiments illustrate the effectiveness of the algorithm for the proposed model within acceptable computational time.
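
Of the search operators mentioned, opposition-based learning is the most self-contained. Below is a minimal sketch of a standard OBL step for real-coded candidates, with placeholder bounds and a single-objective stand-in for the paper's bi-objective model; the paper's variant will differ in its details.

```python
import numpy as np

def opposition_step(population, lower, upper, objective):
    """For each candidate x, evaluate its opposite lower + upper - x and
    keep whichever scores better (minimization). A standard OBL move."""
    improved = []
    for x in population:
        opposite = lower + upper - x
        improved.append(x if objective(x) <= objective(opposite) else opposite)
    return improved

# Toy usage with a single-objective stand-in for the bi-objective model
lower, upper = np.zeros(4), np.ones(4)
pop = [np.random.rand(4) for _ in range(10)]
pop = opposition_step(pop, lower, upper, objective=lambda x: float(np.sum(x**2)))
```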


2010 ◽  
Vol 109 (4) ◽  
pp. 959-965 ◽  
Author(s):  
Tinoy J. Kizhakekuttu ◽  
David D. Gutterman ◽  
Shane A. Phillips ◽  
Jason W. Jurva ◽  
Emily I. L. Arthur ◽  
...  

Recommendations for the measurement of brachial flow-mediated dilation (FMD) typically suggest images be obtained at identical times in the cardiac cycle, usually end diastole (QRS complex onset). This recommendation presumes that inter-individual differences in arterial compliance are minimized. However, published evidence is conflicting. Furthermore, ECG gating is not available on many ultrasound systems; it requires an expensive software upgrade or increased image processing time. We tested whether analysis of images acquired with QRS gating or with the more simplified method of image averaging would yield similar results. We analyzed FMD and nitroglycerin-mediated dilation (NMD) in 29 adults with type 2 diabetes mellitus and in 31 older adults and 12 young adults without diabetes, yielding a range of brachial artery distensibility. FMD and NMD were measured using recommended QRS-gated brachial artery diameter measurements and, alternatively, the average brachial diameters over the entire R-R interval. We found strong agreement between both methods for FMD and NMD (intraclass correlation coefficients = 0.88–0.99). Measuring FMD and NMD using average diameter measurements significantly reduced post-image-processing time (658.9 ± 71.6 vs. 1,024.1 ± 167.6 s for QRS-gated analysis, P < 0.001). FMD and NMD measurements based on average diameter measurements can be performed without reducing accuracy. This finding may allow for simplification of FMD measurement and aid in the development of FMD as a potentially useful clinical tool.
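
A rough sketch of the two diameter read-outs being compared (a single QRS-gated frame versus the average over all frames of the R-R interval) is shown below. The frame indexing and data layout are illustrative and do not reflect the study's analysis software.

```python
def fmd_percent(baseline_diameter_mm: float, peak_diameter_mm: float) -> float:
    """Flow-mediated dilation as percent change from the baseline diameter."""
    return 100.0 * (peak_diameter_mm - baseline_diameter_mm) / baseline_diameter_mm

def gated_diameter(frame_diameters_mm, qrs_frame_index: int) -> float:
    """Diameter from the single frame at QRS onset (end diastole)."""
    return frame_diameters_mm[qrs_frame_index]

def averaged_diameter(frame_diameters_mm) -> float:
    """Mean diameter over every frame of the R-R interval (the simplified method)."""
    return sum(frame_diameters_mm) / len(frame_diameters_mm)
```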


2015 ◽  
Vol 719-720 ◽  
pp. 544-547
Author(s):  
Yu Xue Wang ◽  
Yi Nan Xu ◽  
Yong Hu Ma ◽  
Chang Shou Jin

Distance is usually measured with ultrasonic waves. This work departs from that approach: the distance information is obtained by visual ranging, which has been an active research field for many years. In this paper, an advance alarm system uses image processing technology to calculate the distance between two vehicles; the system obtains the distance accurately and issues alarms effectively. It is implemented with an FPGA (Field Programmable Gate Array), an ARM processor, and an image sensor, and calculates the distance using visual ranging techniques. The simulation environment is built on the HBE-SoC-EXPERT II development platform.
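
The paper does not detail its ranging algorithm, so the sketch below uses a common stand-in, a pinhole-camera estimate based on the known width of the leading vehicle, purely to illustrate how distance can be recovered from image measurements. The constants and alarm threshold are assumed, not taken from the FPGA/ARM implementation described above.

```python
def distance_from_width(focal_length_px: float,
                        real_vehicle_width_m: float,
                        detected_width_px: float) -> float:
    """Pinhole model: distance grows as the detected pixel width shrinks."""
    return focal_length_px * real_vehicle_width_m / detected_width_px

def should_alarm(distance_m: float, threshold_m: float = 15.0) -> bool:
    """Trigger the advance alarm when the leading vehicle is too close
    (the threshold is a placeholder, not from the paper)."""
    return distance_m < threshold_m

# Example: 800 px focal length, 1.8 m wide car imaged at 96 px -> 15.0 m
print(distance_from_width(800.0, 1.8, 96.0))
```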


2013 ◽  
Vol 2013 ◽  
pp. 1-13 ◽  
Author(s):  
Yui-Liang Chen ◽  
Hong-Hsu Yen

Traditional wireless sensor networks (WSNs) transmit scalar data (e.g., temperature and irradiation) to the sink node. A wireless visual sensor network (WVSN) that can transmit image data is a more promising solution than a WSN for sensing, detecting, and monitoring the environment to enhance awareness of the cyber, physical, and social contexts of our daily activities. However, image data are much larger than scalar data, which makes image transmission a challenging issue in battery-limited WVSNs. In this paper, we study an energy-efficient image aggregation scheme in WVSNs. Image aggregation is one way to eliminate the redundant portions of the images captured by different source nodes, so transmission power can be reduced. However, image aggregation requires image processing, which consumes node processing power. Besides this additional processing energy, image aggregation also incurs a MAC-aware retransmission energy loss. We first propose a mathematical model that captures these three factors (image transmission, image processing, and MAC retransmission) in a WVSN. Numerical results based on the mathematical model and a real WVSN sensor node (the Meerkats node) are used to optimize the energy consumption trade-off between image transmission, image processing, and MAC retransmission.
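
The three-way trade-off can be sketched as a small energy model. The per-bit and per-pixel constants below are placeholders (the paper derives actual values for the Meerkats node), and the geometric retransmission model is an assumption used only to show why aggregation pays off when the transmission savings exceed the processing cost.

```python
E_TX_PER_BIT   = 0.5e-6   # J/bit for radio transmission (assumed)
E_PROC_PER_PIX = 0.1e-6   # J/pixel for on-node aggregation processing (assumed)

def expected_tx_energy(bits: int, loss_prob: float) -> float:
    """Transmission energy including expected MAC retransmissions,
    modelled as a geometric number of attempts."""
    attempts = 1.0 / (1.0 - loss_prob)
    return bits * E_TX_PER_BIT * attempts

def total_energy(raw_bits: int, aggregated_bits: int,
                 processed_pixels: int, loss_prob: float,
                 aggregate: bool) -> float:
    """Total node energy with and without image aggregation."""
    if aggregate:
        return (expected_tx_energy(aggregated_bits, loss_prob)
                + processed_pixels * E_PROC_PER_PIX)
    return expected_tx_energy(raw_bits, loss_prob)

# Aggregation wins here: ~0.70 J versus ~1.11 J without it.
print(total_energy(2_000_000, 1_200_000, 300_000, 0.1, aggregate=True))
print(total_energy(2_000_000, 1_200_000, 300_000, 0.1, aggregate=False))
```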

