power budget
Recently Published Documents

TOTAL DOCUMENTS: 404 (five years: 118)
H-INDEX: 15 (five years: 3)

Abstract: While water lifting plays a recognized role in the global atmospheric power budget, estimates of this role in tropical cyclones vary from no effect to a major reduction in storm intensity. To better assess this impact, here we consider the work output of an infinitely narrow thermodynamic cycle with two streamlines connecting the top of the boundary layer in the vicinity of maximum wind (without assuming gradient-wind balance) to an arbitrary level in the inviscid free troposphere. The reduction of a storm’s maximum wind speed due to water lifting is found to decline with increasing efficiency of the cycle and is about 5% for the maximum observed Carnot efficiencies. In the steady-state cycle, there is an extra heat input associated with the warming of precipitating water; the corresponding extra work is positive, of opposite sign to, and several times smaller than, that due to water lifting. We also estimate the gain of kinetic energy in the outflow region. Contrary to previous assessments, this term is found to be large when the outflow radius is small (comparable to the radius of maximum wind). Using our framework, we show that Emanuel’s maximum potential intensity (E-PI) corresponds to a cycle where total work equals the work performed at the top of the boundary layer (net work in the free troposphere is zero). This constrains a dependence between the outflow temperature and the heat input at the point of maximum wind, but does not constrain the radial pressure gradient. We outline the implications of the established patterns for assessing real storms.
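For orientation only, here is a hedged sketch of how such a bound is often written, in our notation (the symbols below, including the lifting-work term $W_L$, are assumptions for illustration and not necessarily the authors' formulation):

```latex
% Hedged sketch: an E-PI-style bound with an explicit water-lifting term.
%   T_s : inflow (sea-surface) temperature,  T_o : outflow temperature
%   C_k/C_D : enthalpy-exchange to drag coefficient ratio
%   k^*, k  : saturation and actual near-surface moist enthalpy
%   W_L : specific work spent lifting precipitating water (assumed notation)
\begin{align}
  \varepsilon &= \frac{T_s - T_o}{T_s} && \text{(Carnot efficiency)} \\
  v_{\max}^2 &\simeq \frac{C_k}{C_D}\,\frac{T_s - T_o}{T_o}\,\left(k^* - k\right) \;-\; W_L
\end{align}
```

Since $v_{\max}$ scales as the square root of the available work, the roughly 5% wind-speed reduction quoted above corresponds to $W_L$ being on the order of 10% of the thermodynamic term ($\sqrt{1-x}\approx 1-x/2$).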


2021 ◽  
Author(s):  
Akram Hadeed

Recently, technology scaling has enabled the placement of an increasing number of cores, in the form of chip multiprocessors (CMPs), on a chip, with continually shrinking transistor sizes to improve performance. In this context, power consumption has become the main constraint in designing CMPs. Uncore components now consume an increasing portion of the on-chip power budget, so designing power management techniques, particularly for the memory and network-on-chip (NoC) subsystems, has become an important problem. Consequently, considerable attention has been directed toward power management of CMP components, particularly shared caches and uncore interconnect structures, to cope with the limited chip power budget.

This work aims to design an energy-efficient uncore architecture by exploiting heterogeneity in components (cache cells) and operational parameters (voltage/frequency). To ensure minimal impact on system performance, a run-time approach is investigated to drive the proposed method. An architecture is proposed in which the cache layer contains heterogeneous cache banks, all placed in a single voltage/frequency domain. Average memory access time (AMAT) is used as a run-time monitor of performance. The size and type of the last-level cache (LLC) and the voltage/frequency of the uncore domain are adjusted according to the measured AMAT, which indicates the system's demand on the uncore.

The proposed hybrid architecture was implemented, evaluated, and compared against a baseline model in which only SRAM banks are used in the last-level cache. Experimental results on the Princeton Application Repository for Shared-Memory Computers (PARSEC) benchmark suite show that the proposed architecture yields up to a 40% reduction in overall chip energy-delay product, with a marginal average performance degradation of 1.2% relative to the baseline. The best energy saving was 55%, and the worst degradation was only 15%.
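As a rough illustration of the run-time loop described above (not the authors' implementation; the operating points, thresholds, and AMAT figures below are assumptions made for the sketch):

```python
# Hedged sketch: use AMAT as a proxy for uncore demand and pick a
# voltage/frequency (V/F) point for the shared uncore domain.
# All numbers here are illustrative assumptions.

OPERATING_POINTS = [(0.8, 1.0), (0.9, 1.5), (1.0, 2.0)]  # (V, GHz)

def amat(hit_time, miss_rate, miss_penalty):
    """Classic average memory access time: hit time plus the
    miss-rate-weighted penalty of going to the next level."""
    return hit_time + miss_rate * miss_penalty

def select_operating_point(current_amat, low=20.0, high=40.0):
    """Map measured AMAT (in cycles) to a V/F point: a low AMAT means
    the uncore is not the bottleneck, so scale it down to save power."""
    if current_amat < low:
        return OPERATING_POINTS[0]   # uncore underutilized: save power
    if current_amat < high:
        return OPERATING_POINTS[1]   # moderate demand
    return OPERATING_POINTS[2]       # uncore is the bottleneck: speed up

a = amat(hit_time=10, miss_rate=0.15, miss_penalty=120)  # 28.0 cycles
print(a, select_operating_point(a))                      # 28.0 (0.9, 1.5)
```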


Author(s):  
Devina Cristine Marubin ◽
Sim Sy Yi
A Can-Sized Satellite (CanSAT) is a small satellite used for educational purposes. CanSATs let students build their own satellites creatively, which makes the learning process more effective. In Malaysia, SiswaSAT is held by the Malaysian Space Agency for students in different categories to build satellites according to a set of rules, and it is intended to be a low-cost project. A CanSAT can be divided into a few subsystems: the communication system, onboard data acquisition, the ground control station, and the power system. The power system is one of the most important and heaviest subsystems. It must supply sufficient power, yet weight and size are primary concerns, because the CanSAT must not exceed the required weight, and selecting a small, lightweight power supply that matches the overall power budget is challenging. Power supply selection should therefore take these details into account. The power distribution design should supply the appropriate current and voltage to each component according to its specification. This study aims to develop and test a prototype, named ScoreSAT, that can provide data and has enough power for the whole operation. An initiative to develop an appropriate power distribution design for the CanSAT is therefore taken to overcome the power-system problem. Moreover, each subsystem needs to be tested, by obtaining results from the onboard data acquisition and transmitting them over the communication system, before being integrated with the power system. The ScoreSAT prototype must carry the whole system mounted inside it, so the interior space must be fully utilized for the system to fit. ScoreSAT completed its mission by acquiring data throughout the operation.
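The power-budget bookkeeping the abstract describes amounts to summing each subsystem's draw and checking it against the supply. A minimal sketch, in which the component names and figures are illustrative assumptions rather than ScoreSAT's actual parts:

```python
# Hedged sketch of a CanSAT power budget: total the per-subsystem draw
# and estimate battery margin. All components and numbers are assumed.

SUBSYSTEMS = {                 # (voltage [V], current [mA])
    "microcontroller": (5.0, 50),
    "radio_tx":        (3.3, 120),
    "sensors":         (3.3, 30),
    "gps":             (3.3, 45),
}

BATTERY_CAPACITY_MAH = 1000    # assumed single-cell pack
MISSION_HOURS = 2.0

def power_budget(subsystems):
    """Total power draw in milliwatts."""
    return sum(v * i for v, i in subsystems.values())

def battery_margin(subsystems, capacity_mah, hours):
    """Remaining capacity fraction after the mission (crude: ignores
    converter efficiency and battery derating)."""
    drawn_mah = sum(i for _, i in subsystems.values()) * hours
    return 1.0 - drawn_mah / capacity_mah

print(f"total draw: {power_budget(SUBSYSTEMS):.0f} mW")          # 894 mW
print(f"margin: {battery_margin(SUBSYSTEMS, BATTERY_CAPACITY_MAH, MISSION_HOURS):.0%}")  # 51%
```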


AI ◽  
2021 ◽  
Vol 2 (4) ◽  
pp. 705-719
Author(s):  
Qian Huang ◽  
Chenghung Hsieh ◽  
Jiaen Hsieh ◽  
Chunchen Liu

Artificial intelligence (AI) is fundamentally transforming smart buildings by increasing energy efficiency and operational productivity, improving the living experience, and enabling better healthcare services. Sudden Infant Death Syndrome (SIDS) is the unexpected and unexplained death of an infant under one year old. Previous research reports that sleeping on the back can significantly reduce the risk of SIDS. Existing sensor-based wearable or touch-based monitors have serious drawbacks, such as inconvenience and false alarms, so they are unattractive for monitoring infant sleeping postures. Several recent studies use a camera, portable electronics, and an AI algorithm to monitor infant sleep postures. However, two major bottlenecks prevent AI from detecting potential baby sleeping hazards in smart buildings: the lack of a suitable day-and-night dataset and the huge memory demand of existing models. To overcome these bottlenecks, in this work we create a complete dataset containing 10,240 day and night vision samples and use post-training weight quantization to solve the memory demand problem. Experimental results verify the effectiveness and benefits of the proposed idea. Compared with state-of-the-art AI algorithms in the literature, the proposed method reduces the memory footprint by at least 89%, while achieving a similarly high detection accuracy of about 90%. Our proposed AI algorithm requires only 6.4 MB of memory, while other existing AI algorithms for sleep posture detection require 58.2 MB to 275 MB. This comparison shows that memory is reduced by at least 9 times without sacrificing detection accuracy. Therefore, our memory-efficient AI algorithm has great potential to be deployed and run on edge devices, such as microcontrollers and the Raspberry Pi, which have small memory, limited power budgets, and constrained computing resources.
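Post-training weight quantization, the memory-saving step named above, maps trained float32 weights to low-bit integers with a scale factor. A minimal generic sketch of the technique (not the authors' code; symmetric per-tensor int8 is one common variant):

```python
import numpy as np

# Hedged sketch: symmetric per-tensor int8 quantization, w ~= scale * q,
# cutting weight storage roughly 4x relative to float32.

def quantize_int8(weights: np.ndarray):
    """Map float weights to int8 in [-127, 127] with one scale per tensor."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for inference."""
    return q.astype(np.float32) * scale

w = np.random.randn(1000).astype(np.float32)   # stand-in for a weight tensor
q, s = quantize_int8(w)
err = np.max(np.abs(w - dequantize(q, s)))
print(f"bytes: {w.nbytes} -> {q.nbytes}, max abs error: {err:.4f}")
```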


2021 ◽  
Vol 136 (12) ◽  
Author(s):  
Richard Brenner ◽  
Christos Leonidopoulos

Abstract: The operation at the Z-pole of the FCC-ee machine will deliver the highest possible instantaneous luminosities, with the goal of collecting the largest Z boson datasets (Tera-Z), and will enable a programme of standard model physics studies with unprecedented precision. The data acquisition and trigger systems of the FCC-ee experiments must be designed to be as unbiased and robust as possible, so that the systematic uncertainties associated with these datasets are contained at the smallest possible level and do not compromise the extremely small statistical uncertainties. In designing these experiments, we are confronted by questions on detector read-out speeds under an extremely tight material and power budget, trigger systems with a first hardware level or implemented exclusively in software, the impact of background sources on event sizes, ultimate-precision luminosity monitoring (to the $10^{-5}$–$10^{-4}$ level), and sensitivity to a broad range of non-conventional exotic signatures, such as long-lived non-relativistic particles. We review the various challenges of online selection for the most demanding Tera-Z running scenario and the constraints they pose on the design of FCC-ee detectors.
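A back-of-the-envelope check (our arithmetic, not from the paper) shows why such precision targets are statistics-hungry: a purely statistical relative precision of $1/\sqrt{N}$ at the $10^{-5}$ level requires on the order of $10^{10}$ reference events (e.g. small-angle Bhabha counts), which is why systematic effects dominate the luminosity-monitor design.

```python
# Hedged arithmetic sketch: events needed for a given purely statistical
# relative precision, assuming counting (Poisson) statistics only.

for target in (1e-4, 1e-5):
    n_events = (1.0 / target) ** 2    # 1/sqrt(N) = target  =>  N = target^-2
    print(f"relative precision {target:g} -> N = {n_events:.0e} events")
```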


2021 ◽  
Vol 20 (6) ◽  
pp. 1-24
Author(s):  
Jason Servais ◽  
Ehsan Atoofian

In recent years, Deep Neural Networks (DNNs) have been deployed in a diverse set of applications, from voice recognition to scene generation, mostly due to their high accuracy. DNNs are known to be computationally intensive applications requiring a significant power budget. There have been a large number of investigations into the energy efficiency of DNNs. However, most of them have focused primarily on inference, while the training of DNNs has received little attention. This work proposes an adaptive technique to identify and avoid redundant computations during the training of DNNs. Elements of activations exhibit a high degree of similarity, causing inputs and outputs of neural network layers to perform redundant computations. Based on this observation, we propose Adaptive Computation Reuse for Tensor Cores (ACRTC), where the results of previous arithmetic operations are used to avoid redundant computations. ACRTC is an architectural technique that enables accelerators to take advantage of similarity in input operands, speeding up the training process while also increasing energy efficiency. ACRTC dynamically adjusts the strength of computation reuse based on the tolerance for precision relaxation in different training phases. Over a wide range of neural network topologies, ACRTC accelerates training by 33% and saves 32% of energy, with negligible impact on accuracy.
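The reuse idea can be modeled in software as a memo table keyed by approximate operand matches, with the tolerance acting as the precision-relaxation knob. A hedged toy model (the real technique operates on tensor-core operands in hardware; this cache is only illustrative):

```python
# Hedged sketch: return a cached result when operands are within a
# tolerance of a previously seen pair, instead of recomputing.

class ApproxReuseCache:
    def __init__(self, tolerance: float):
        self.tolerance = tolerance   # precision-relaxation knob
        self.entries = []            # list of ((a, b), result)
        self.hits = 0

    def multiply(self, a: float, b: float) -> float:
        for (pa, pb), r in self.entries:
            # Reuse when both operands are close to a cached pair.
            if abs(a - pa) <= self.tolerance and abs(b - pb) <= self.tolerance:
                self.hits += 1
                return r
        r = a * b                    # fall back to the real computation
        self.entries.append(((a, b), r))
        return r

cache = ApproxReuseCache(tolerance=0.01)
activations = [0.500, 0.502, 0.501, 0.900]   # similar values reuse results
results = [cache.multiply(x, 2.0) for x in activations]
print(results, "hits:", cache.hits)          # 2 reuse hits
```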


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7855
Author(s):  
Amr Amrallah ◽  
Ehab Mahmoud Mohamed ◽  
Gia Khanh Tran ◽  
Kei Sakaguchi

Modern wireless networks are notorious for being very dense, uncoordinated, and selfish, especially under greedy user demands. This leads to a critical scarcity of spectrum resources. Dynamic Spectrum Access (DSA) is considered a promising solution to this scarcity problem. With the aid of Unmanned Aerial Vehicles (UAVs), a post-disaster surveillance system is implemented using a Cognitive Radio Network (CRN). UAVs are distributed over the disaster area to capture live images of the damage and send them to the disaster management center. The CRN enables the UAVs to utilize a portion of the spectrum of the Electronic Toll Collection (ETC) gates operating in the same area. In this paper, a joint transmission-power selection, data-rate maximization, and interference mitigation problem is addressed. Considering all these conflicting parameters, the problem is investigated as a budget-constrained multi-player multi-armed bandit (MAB) problem. The whole process is carried out in a decentralized manner, with no information exchanged between UAVs. To achieve this, two power-budget-aware MAB (PBA-MAB) algorithms, namely an upper confidence bound (PBA-UCB) algorithm and a Thompson sampling (PBA-TS) algorithm, are proposed to select the transmission power value efficiently. The proposed PBA-MAB algorithms show outstanding performance over random power value selection in terms of achievable data rate.
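To make the budget-constrained bandit concrete, here is a hedged single-agent sketch in the spirit of PBA-UCB: arms are transmit-power levels, reward is achieved data rate, and each pull spends energy from a fixed budget. The reward model and all numbers are our assumptions, not the paper's system model:

```python
import math, random

POWER_LEVELS = [0.1, 0.5, 1.0]     # watts (the "arms"), assumed values
BUDGET = 50.0                       # total energy budget [J], 1 s per slot

def reward(p):                      # unknown to the learner: noisy rate
    return max(0.0, math.log2(1 + p / 0.2) + random.gauss(0, 0.1))

counts = [0] * len(POWER_LEVELS)
sums = [0.0] * len(POWER_LEVELS)
spent, t = 0.0, 0

while spent + min(POWER_LEVELS) <= BUDGET:
    t += 1
    if 0 in counts:                 # play each arm once first
        arm = counts.index(0)
    else:                           # UCB on mean reward per unit power
        arm = max(range(len(POWER_LEVELS)), key=lambda a:
                  sums[a] / counts[a] / POWER_LEVELS[a]
                  + math.sqrt(2 * math.log(t) / counts[a]))
    if spent + POWER_LEVELS[arm] > BUDGET:
        break                       # cannot afford this arm: stop
    r = reward(POWER_LEVELS[arm])
    counts[arm] += 1; sums[arm] += r; spent += POWER_LEVELS[arm]

print("pulls per power level:", counts, f"energy used: {spent:.1f} J")
```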


2021 ◽  
Vol 20 (5s) ◽  
pp. 1-22
Author(s):  
Biswadip Maity ◽  
Saehanseul Yi ◽  
Dongjoo Seo ◽  
Leming Cheng ◽  
Sung-Soo Lim ◽  
...  

Self-driving systems execute an ensemble of different self-driving workloads on embedded systems in an end-to-end manner, subject to functional and performance requirements. To enable exploration, optimization, and end-to-end evaluation on different embedded platforms, system designers critically need a benchmark suite that enables flexible and seamless configuration of self-driving scenarios and realistically reflects the unique characteristics of real-world self-driving workloads. Existing CPU and GPU embedded benchmark suites typically (1) consider isolated applications, (2) are not sensor-driven, and (3) are unable to support emerging self-driving applications that simultaneously utilize CPUs and GPUs under stringent timing requirements. On the other hand, full-system self-driving simulators (e.g., AUTOWARE, APOLLO) focus on functional simulation but lack the ability to evaluate the self-driving software stack on various embedded platforms. To address these design needs, we present Chauffeur, the first open-source end-to-end benchmark suite for self-driving vehicles with configurable, representative workloads. Chauffeur is easy to configure and run, enabling researchers to evaluate different platform configurations and explore alternative instantiations of the self-driving software pipeline. Chauffeur runs on diverse emerging platforms and exploits heterogeneous onboard resources. Our initial characterization of Chauffeur on two embedded platforms, the NVIDIA Jetson TX2 and Drive PX2, enables a comparative evaluation of these GPU platforms executing an end-to-end self-driving computational pipeline, assessing end-to-end response times while also creating opportunities to form application gangs for better response times. Chauffeur enables researchers to benchmark representative self-driving workloads and flexibly compose them into different self-driving scenarios to explore end-to-end tradeoffs between design constraints, power budget, real-time performance requirements, and application accuracy.
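To illustrate the kind of scenario composition the abstract describes, here is a hedged sketch of a configurable pipeline; the stage names, rates, device assignments, and keys are our invention for illustration, not Chauffeur's actual configuration schema:

```python
# Hedged sketch: compose a self-driving scenario from workload stages
# with per-stage rates and CPU/GPU placement, then derive a crude
# per-stage latency budget. All names and numbers are assumptions.

scenario = {
    "platform": "jetson-tx2",          # target embedded board
    "deadline_ms": 100,                # end-to-end response-time requirement
    "pipeline": [
        {"stage": "camera_capture",   "rate_hz": 30, "device": "cpu"},
        {"stage": "object_detection", "rate_hz": 15, "device": "gpu"},
        {"stage": "lane_detection",   "rate_hz": 15, "device": "gpu"},
        {"stage": "planning",         "rate_hz": 10, "device": "cpu"},
    ],
}

def per_stage_budget(scenario):
    """Split the end-to-end deadline evenly across stages; a real tool
    would weight this by profiled stage latencies."""
    share = scenario["deadline_ms"] / len(scenario["pipeline"])
    return {s["stage"]: share for s in scenario["pipeline"]}

print(per_stage_budget(scenario))      # 25 ms per stage
```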


2021 ◽  
Vol 13 (19) ◽  
pp. 3933
Author(s):  
Chuan Huang ◽  
Zhongyu Li ◽  
Mingyue Lou ◽  
Xingye Qiu ◽  
Hongyang An ◽  
...  

The BeiDou navigation satellite system shows potential for passive-radar vessel target detection owing to its global coverage. However, the restrained power budget of the BeiDou satellites hampers detection performance. To overcome this limitation, this paper proposes a long-time optimized integration method to obtain an adequate signal-to-noise ratio (SNR). Over the long observation time, range migration, intricate Doppler migration, and the noncoherence of the echo pose challenges to the integration processing. First, the keystone transform is applied to correct the range walk. Then, considering the noncoherence of the entire echo, a hybrid integration strategy is adopted. To remove the Doppler migration and correct the residual range migration, the long-time integration is modeled as an optimization problem. Finally, the particle swarm optimization (PSO) algorithm is applied to solve this problem, after which the target echo over the long observation time is well concentrated, providing reliable detection performance for the BeiDou-based passive radar. Its effectiveness is demonstrated by simulated and experimental results.
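For readers unfamiliar with PSO, the search step used above can be sketched generically as follows. This is a hedged, textbook PSO over a toy one-dimensional objective standing in for "integration gain"; the paper's cost function operates on real radar echoes over migration/Doppler parameters:

```python
import random

# Hedged sketch: standard particle swarm optimization maximizing a toy
# objective. Swarm parameters W, C1, C2 are conventional assumed values.

def objective(x):                     # toy "integration gain" surface
    return -(x - 3.0) ** 2 + 9.0      # peak at x = 3

N, ITERS, W, C1, C2 = 20, 50, 0.7, 1.5, 1.5
pos = [random.uniform(-10, 10) for _ in range(N)]
vel = [0.0] * N
pbest = pos[:]                        # per-particle best positions
gbest = max(pos, key=objective)       # swarm-wide best position

for _ in range(ITERS):
    for i in range(N):
        r1, r2 = random.random(), random.random()
        # Velocity blends inertia, pull toward personal best, and
        # pull toward the global best.
        vel[i] = (W * vel[i] + C1 * r1 * (pbest[i] - pos[i])
                             + C2 * r2 * (gbest - pos[i]))
        pos[i] += vel[i]
        if objective(pos[i]) > objective(pbest[i]):
            pbest[i] = pos[i]
    gbest = max(pbest, key=objective)

print(f"estimated optimum: x = {gbest:.3f}")   # ~3.0
```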

