Power-efficient beam tracking during connected mode DRX in mmWave and sub-THz systems

2020 ◽
Author(s):  
Syed Hashim Ali Shah ◽  
Sundar Aditya ◽  
Sundeep Rangan

Discontinuous reception (DRX), wherein a user equipment (UE) temporarily disables its receiver, is a critical power-saving feature in modern cellular systems. DRX is likely to be used aggressively at mmWave and sub-THz frequencies due to the high front-end power consumption. A key challenge for DRX at these frequencies is blockage-induced link outages: a UE will likely need to track many directional links to ensure reliable multi-connectivity, thereby increasing power consumption. In this paper, we explore reinforcement learning-based link tracking policies in connected mode DRX that reduce power consumption by tracking only a fraction of the available links, without adversely affecting outage and throughput performance. Through detailed system-level simulations at 28 GHz (5G) and 140 GHz (6G), we observe that even sub-optimal link tracking policies can achieve considerable power savings with relatively little degradation in outage and throughput performance, especially with digital beamforming at the UE. In particular, we show that it is feasible to reduce power consumption by 75% and still achieve up to 95% (80%) of the maximum throughput using digital beamforming at 28 GHz (140 GHz), subject to an outage probability of at most 1%.
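
As a toy illustration of learning which links to track (this is not the paper's algorithm: the state space, blockage dynamics, and reward weights below are all assumptions, and the sketch even assumes blockage is fully observed for simplicity), here is a minimal tabular Q-learning policy in which a UE tracks one of four links per DRX cycle and trades throughput against tracking power:

```python
import random

N_LINKS, ALPHA, GAMMA, EPS = 4, 0.1, 0.9, 0.1
P_BLOCK, P_UNBLOCK = 0.2, 0.3   # per-cycle blockage dynamics (assumed)
TRACK_COST = 0.25               # relative power cost of tracking one link (assumed)

Q = {}                          # tabular Q-values: (state, action) -> float

def q(s, a):
    return Q.get((s, a), 0.0)

def evolve(blocked):
    """Each link follows a two-state blocked/unblocked Markov chain."""
    return tuple((random.random() < P_BLOCK) if not b else (random.random() >= P_UNBLOCK)
                 for b in blocked)

state = (False,) * N_LINKS      # simplification: blockage state fully observed
for _ in range(50_000):
    if random.random() < EPS:   # epsilon-greedy exploration
        a = random.randrange(N_LINKS)
    else:
        a = max(range(N_LINKS), key=lambda x: q(state, x))
    nxt = evolve(state)
    # reward: unit throughput if the tracked link is up, an outage penalty
    # if it is blocked, minus the power spent tracking it
    r = (1.0 if not nxt[a] else -1.0) - TRACK_COST
    best = max(q(nxt, x) for x in range(N_LINKS))
    Q[(state, a)] = q(state, a) + ALPHA * (r + GAMMA * best - q(state, a))
    state = nxt

print(f"{len(Q)} state-action values learned")
```

Even this crude policy learns to avoid links that are frequently blocked, which is the intuition behind tracking only a fraction of the available links.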


2014 ◽  
Vol 556-562 ◽  
pp. 2076-2080
Author(s):  
Xiang Yu ◽  
Yao Song

With the rapid development of emerging applications and rising data rates in the LTE (Long Term Evolution) system, terminal power consumption is becoming a serious problem. In LTE, the discontinuous reception (DRX) mechanism serves as an important means of saving terminal energy. Through the operation of its various timers, this paper presents a detailed analysis of the DRX principle and, on that basis, proposes an adjustable DRX long cycle. Based on the DRX semi-Markov process and the ETSI data model, formulas for the power-saving factor and wake-up delay of adjustable LTE DRX are derived. Comparing the fixed DRX cycle with the adjustable DRX cycle in terms of power saving and wake-up delay shows that the optimized mechanism clearly reduces terminal power consumption.
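
As a first-order sanity check on those two metrics (a simplified single-cycle approximation assuming packet arrivals uniform over the cycle, not the paper's semi-Markov/ETSI derivation), with DRX cycle length T_C and on-duration T_on:

```latex
% Simplified single-cycle DRX model; arrivals assumed uniform over the cycle.
% eta: power-saving factor, E[D]: mean wake-up delay.
\[
  \eta = \frac{T_C - T_{on}}{T_C},
  \qquad
  \mathrm{E}[D]
    = \Pr(\text{arrival during sleep}) \cdot \frac{T_C - T_{on}}{2}
    = \frac{(T_C - T_{on})^{2}}{2\,T_C}.
\]
```

Lengthening the cycle pushes the power-saving factor toward 1 but grows the mean wake-up delay roughly quadratically in the sleep span, which is precisely the trade-off an adjustable long cycle tunes.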


2011 ◽  
Vol 16 (4) ◽  
pp. 66-72
Author(s):  
V.Sh. Melikyan ◽  
A.A. Durgaryan ◽  
H.P. Petrosyan ◽  
A.G. Stepanyan

A power- and noise-efficient solution for a phase-locked loop (PLL) is presented. A lock detector is implemented to deactivate all PLL components except the voltage-controlled oscillator (VCO) in the locked state. The signals that deactivate and reactivate the PLL are discussed at the system level. The introduced technique significantly saves power and decreases PLL output jitter. As a result, overall PLL power consumption and output noise decreased by about 35-38% at the expense of approximately 17% area overhead.
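
A behavioral sketch of the gating idea (illustrative only: the paper's lock detector is a circuit-level design, and LOCK_THRESH, LOCK_COUNT, and the settling profile below are hypothetical):

```python
# Behavioral model of a PLL lock detector that gates everything but the VCO.
# Assumption: lock is declared after LOCK_COUNT consecutive reference cycles
# with |phase error| below LOCK_THRESH; an out-of-range cycle re-enables the loop.
LOCK_THRESH = 0.02   # normalized phase-error threshold (hypothetical)
LOCK_COUNT = 16      # consecutive in-range cycles to declare lock (hypothetical)

def lock_gate(phase_errors):
    """Yield (locked, loop_enabled) per reference cycle. When locked, the
    PFD, charge pump, and dividers are powered down and the VCO free-runs."""
    streak, locked = 0, False
    for err in phase_errors:
        if abs(err) < LOCK_THRESH:
            streak += 1
            if streak >= LOCK_COUNT:
                locked = True
        else:
            streak, locked = 0, False   # phase error grew: reactivate the loop
        yield locked, not locked

# Example: the loop settles and locks, a disturbance wakes it, it relocks.
errs = [0.3 * 0.7 ** n for n in range(30)] + [0.1] + [0.0] * 20
trace = list(lock_gate(errs))
print("cycles with loop powered down:", sum(locked for locked, _ in trace))
```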


Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 617 ◽  
Author(s):  
Yasir Mehmood ◽  
Lei Zhang ◽  
Anna Förster

Machine-type communication (MTC) is an emerging communication paradigm in which intelligent machines communicate with each other without human intervention. Mobile cellular networks, with their wide coverage, high data rates, and continuously decreasing costs, offer a good infrastructure for implementing it. However, power consumption is a major issue, which 3GPP (3rd Generation Partnership Project) has recently addressed by defining power-saving mechanisms. In this paper, we address the problem of modeling these power-saving mechanisms. Existing modeling schemes do not consider the full range of states in the discontinuous reception (DRX) mechanism of LTE-A networks. We propose a semi-Markov-based analytical model that closes this gap and shows very good results in predicting performance evaluation metrics, such as the power-saving factor and wake-up latency of MTC devices, compared to simulation experiments. Furthermore, we evaluate the DRX parameters and their impact on the power consumption of MTC devices.
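
A minimal numeric sketch of the semi-Markov bookkeeping behind such a model (the state set is truncated, and all transition probabilities, holding times, and power figures are invented for illustration, not taken from the paper):

```python
import numpy as np

# Simplified DRX state set; the paper models the full LTE-A state space.
states = ["active", "short_DRX_sleep", "long_DRX_sleep"]
P = np.array([[0.0, 0.9, 0.1],           # embedded-chain transitions (assumed)
              [0.3, 0.5, 0.2],
              [0.4, 0.0, 0.6]])
hold_ms = np.array([10.0, 40.0, 320.0])  # mean holding time per state (assumed)
power_mw = np.array([100.0, 5.0, 2.0])   # receiver power per state (assumed)

# Stationary distribution of the embedded chain: pi = pi @ P, sum(pi) = 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()

# Semi-Markov step: long-run time fraction weights pi by mean holding times.
time_frac = pi * hold_ms / (pi @ hold_ms)
print("power-saving factor:", time_frac[1:].sum())   # fraction of time asleep
print("average power (mW):", time_frac @ power_mw)
```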


2018 ◽  
Vol 27 (14) ◽  
pp. 1850230 ◽  
Author(s):  
Samaneh Babayan-Mashhadi ◽  
Mona Jahangiri-Khah

As power consumption is one of the major issues in implantable biomedical devices, this paper proposes a novel quantization method for successive approximation register (SAR) analog-to-digital converters (ADCs) that saves 80% of the power consumption of a conventional structure in electroencephalogram (EEG) signal recording systems. Exploiting the characteristics of neural signals, the proposed power-saving technique quantizes only the difference between the current input sample and the previous one, using a power-efficient SAR ADC of lower resolution. To verify the proposed quantization scheme, the ADC is modeled at the system level in Matlab and designed and simulated at the circuit level in a 0.18 µm CMOS technology. When applied to neural signal acquisition, SPICE simulations show that at a sampling rate of 25 kS/s, the proposed 8-bit ADC consumes 260 nW from a 1.8 V supply while achieving 7.1 effective number of bits.
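
A system-level sketch of the difference-quantization idea, in the spirit of the Matlab modeling the authors describe (the resolutions, reference voltage, and ideal SAR model are assumptions, not the paper's circuit):

```python
import math

FULL_BITS, DELTA_BITS, VREF = 8, 4, 1.8   # assumed resolutions and reference

def sar_quantize(x, bits, vref):
    """Ideal SAR binary search over [0, vref); returns the digital code."""
    code = 0
    for b in range(bits - 1, -1, -1):
        trial = code | (1 << b)
        if x >= trial * vref / (1 << bits):   # comparator decision per bit trial
            code = trial
    return code

def delta_adc(samples):
    """First sample at full resolution, then low-resolution deltas only."""
    codes, prev = [], 0.0
    span = VREF / (1 << (FULL_BITS - DELTA_BITS))   # reduced delta full-scale
    for i, s in enumerate(samples):
        if i == 0:
            c = sar_quantize(s, FULL_BITS, VREF)
            prev = c * VREF / (1 << FULL_BITS)      # reconstructed sample value
        else:
            d = min(max(s - prev + span / 2, 0.0), span)  # offset and clip delta
            c = sar_quantize(d, DELTA_BITS, span)
            prev += c * span / (1 << DELTA_BITS) - span / 2
        codes.append(c)
    return codes

# A slowly varying test tone, standing in for an EEG-like signal at 25 kS/s.
sig = [0.9 + 0.05 * math.sin(2 * math.pi * 10 * t / 25_000) for t in range(100)]
print(delta_adc(sig)[:8])
```

Because the delta spans a much smaller range than the full signal, far fewer bit trials (and comparator/DAC switchings) are needed per conversion, which is where the claimed power saving comes from.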


Author(s):  
Alekhya Orugonda ◽  
V. Kiran Kumar

Background: It is important to minimize bandwidth, as this improves battery life, system reliability, and other environmental concerns, and supports energy optimization. Providers likewise do everything within their power to reduce the amount of data that flows through their pipes. To increase resource utilization, task consolidation is an effective technique, greatly enabled by virtualization technologies, which facilitate the concurrent execution of several tasks and, in turn, reduce energy consumption. Two representative approaches are MaxUtil, which aims to maximize resource utilization, and Energy Conscious Task Consolidation, which explicitly takes into account both active and idle energy consumption. Method: This paper proposes an Energy Aware Cloud Load Balancing Technique (EACLBT) to improve performance in terms of energy and run time. It predicts the load of a host after VM allocation, and if the prediction indicates the host would become overloaded, the VM is created on a different host. This minimizes the number of migrations caused by host overload conditions. The proposed technique minimizes bandwidth and energy utilization. Results: The results show that the proposed energy-efficient method monitors energy exhaustion and supports static and dynamic system-level optimization. EACLBT reduces the number of powered-on physical machines and the average power consumption compared to other placement algorithms with power saving. Besides minimizing bandwidth and energy use, a reduction in the number of executed instructions is also achieved, and performance improves further with common subexpression elimination. Conclusion: This paper comprehensively describes EACLBT (Energy Aware Cloud Load Balancing Technique), which places virtual machines for power-saving purposes. Average power consumption is used as the performance metric, with the result of PALB as the baseline. EACLBT reduces the number of powered-on physical machines and the average power consumption compared to other placement algorithms with power saving. On average, an idle server consumes approximately 70% of the power consumed by a server running at full CPU speed, so powering idle machines off matters. We can therefore say that the proposed Energy Aware Cloud Load Balancing Technique (EACLBT) is effective in minimizing bandwidth and reducing energy use.
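
A sketch of the placement rule described in the Method section (the load predictor and the overload threshold are hypothetical; the abstract does not specify them):

```python
# Place a VM only where the *predicted* post-allocation utilization stays
# below an overload threshold; preferring the fullest host that still fits
# consolidates load so idle machines can be powered off.
OVERLOAD = 0.80   # assumed CPU-utilization threshold

def predict_util(util, vm_demand):
    """Naive predictor (assumption): current utilization plus VM demand."""
    return util + vm_demand

def place_vm(hosts, vm_demand):
    """hosts: name -> current utilization. Returns the chosen host or None."""
    for name, util in sorted(hosts.items(), key=lambda kv: -kv[1]):
        if predict_util(util, vm_demand) < OVERLOAD:
            hosts[name] = predict_util(util, vm_demand)
            return name
    return None   # nothing fits: a new physical machine must be powered on

hosts = {"h1": 0.55, "h2": 0.30, "h3": 0.10}
print(place_vm(hosts, 0.20))   # -> "h1" (0.75 < 0.80); h3 can stay powered off
```

Checking the prediction before allocation, rather than migrating after a host overloads, is what avoids the migration traffic and the associated bandwidth and energy cost.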


Sensors ◽  
2019 ◽  
Vol 19 (22) ◽  
pp. 4944 ◽  
Author(s):  
Mamta Agiwal ◽  
Mukesh Kumar Maheshwari ◽  
Hu Jin

Sensor-enabled Internet of Things (IoT) has become an integral part of the modern, digital, and connected ecosystem. Narrowband IoT (NB-IoT) is one of its economical variants, preferable for low-power, resource-limited sensor-based applications. One of the major characteristics of NB-IoT is its reliable coverage enhancement (CE), achieved by repeating signal transmissions. This repeated transmission of the same signal challenges power saving in low-complexity NB-IoT devices. Additionally, NB-IoT devices are expected to suffer from congestion due to simultaneous random access procedures (RAPs) from an enormous number of devices. Multiple RAP reattempts further reduce power saving in NB-IoT devices. We propose a novel power-efficient RAP (PE-RAP) for reducing the power consumption of NB-IoT devices in a highly congested environment. The existing RAP does not differentiate between failures due to poor channel conditions and failures due to collision. After an RAP failure, whether from collision or a poor channel, a device can apply power ramping or transit to a higher CE level with a higher repetition configuration. In the proposed PE-RAP, NB-IoT devices re-ascertain the channel conditions after an RAP attempt failure, so that impediments due to a poor channel are reduced. Power increments and repetition enhancements are applied only when necessary. We probabilistically obtain the chances of RAP reattempts, and subsequently evaluate the average power consumption of devices at different CE levels for different repetition configurations. We validate our analysis by simulation studies.
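
A sketch of the PE-RAP reattempt rule described above (the CE thresholds, repetition counts, and ramping step are assumptions for illustration):

```python
# After a failed random-access attempt, re-estimate RSRP and escalate the
# coverage-enhancement (CE) level or transmit power only when the channel,
# rather than a collision, explains the failure.
CE_REPS = [1, 2, 8]            # repetitions per CE level (assumed)
RSRP_EDGE = [-110.0, -120.0]   # CE-level RSRP thresholds in dBm (assumed)
RAMP_DB = 2.0                  # power-ramping step in dB (assumed)

def ce_level(rsrp_dbm):
    """Lowest CE level whose threshold the fresh RSRP estimate clears."""
    for lvl, edge in enumerate(RSRP_EDGE):
        if rsrp_dbm > edge:
            return lvl
    return len(RSRP_EDGE)

def next_attempt(prev_lvl, prev_pwr_dbm, rsrp_dbm):
    """Return (CE level, tx power, repetitions) for the reattempt."""
    lvl = ce_level(rsrp_dbm)
    if lvl > prev_lvl:                      # channel degraded: more repetitions
        return lvl, prev_pwr_dbm, CE_REPS[lvl]
    if lvl == prev_lvl and lvl > 0:         # still weak at same level: ramp power
        return lvl, prev_pwr_dbm + RAMP_DB, CE_REPS[lvl]
    return lvl, prev_pwr_dbm, CE_REPS[lvl]  # good channel: likely a collision, retry

print(next_attempt(0, 23.0, -105.0))   # -> (0, 23.0, 1): collision suspected
print(next_attempt(0, 23.0, -115.0))   # -> (1, 23.0, 2): channel got worse
```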


2016 ◽  
Vol 05 (02) ◽  
pp. 1650002 ◽  
Author(s):  
Larry R. D’Addario ◽  
Douglas Wang

Radio telescopes that employ arrays of many antennas are in operation, and ever larger ones are being designed and proposed. Signals from the antennas are combined by cross-correlation. While the cost of most components of the telescope is proportional to the number of antennas N, the cost and power consumption of cross-correlation are proportional to N² and dominate at sufficiently large N. Here, we report the design of an integrated circuit (IC) that performs digital cross-correlations for arbitrarily many antennas in a power-efficient way. It uses an intrinsically low-power architecture in which the movement of data between devices is minimized. In a large system, each IC performs correlations for all pairs of antennas but for a portion of the telescope's bandwidth (the so-called "FX" structure). In our design, the correlations are performed in an array of 4096 complex multiply-accumulate (CMAC) units. This is sufficient to perform all correlations in parallel for 64 signals (N = 32 antennas with two opposite-polarization signals per antenna). When N is larger, the input data are buffered in an on-chip memory and the CMACs are reused as many times as needed to compute all correlations. The design has been synthesized and simulated so as to obtain accurate estimates of the IC's size and power consumption. It is intended for fabrication in a 32 nm silicon-on-insulator process, where it will require less than 12 mm² of silicon area and achieve an energy efficiency of 1.76–3.3 pJ per CMAC operation, depending on the number of antennas. Operation has been analyzed in detail up to N = 4096. The system-level energy efficiency, including board-level I/O, power supplies, and controls, is expected to be 5–7 pJ per CMAC operation. Existing correlators for the JVLA (N = 27) and ALMA (N = 66) telescopes achieve about 5000 pJ and 1000 pJ, respectively, using application-specific ICs (ASICs) in older technologies. To our knowledge, the largest-N existing correlator is LEDA at N = 256; it uses GPUs built in 28 nm technology and achieves about 1000 pJ. Correlators being designed for the SKA telescopes (N = 197 and N = 512) using FPGAs in 16 nm technology are predicted to achieve about 100 pJ.
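
A numpy sketch of the FX-correlator arithmetic that the CMAC array implements in hardware (a model of the computation only, not of the chip's buffering, reuse scheduling, or fixed-point design):

```python
import numpy as np

def correlate_subband(x):
    """x: (n_signals, n_samples) complex samples for one frequency subband.
    Returns the (n_signals, n_signals) accumulated cross-correlation matrix;
    entry (i, j) is sum over samples of x_i * conj(x_j), i.e. one CMAC's job."""
    return x @ x.conj().T

rng = np.random.default_rng(0)
# 64 signals = 32 dual-polarization antennas, as in the chip's parallel case.
sig = rng.standard_normal((64, 1024)) + 1j * rng.standard_normal((64, 1024))
R = correlate_subband(sig)
print(R.shape)   # (64, 64)
```

The single matrix product makes the N² pair count explicit: doubling the number of antennas quadruples the CMAC operations, which is why correlation energy per operation dominates the power budget of a large array.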


2014 ◽  
Vol E97.B (12) ◽  
pp. 2698-2705
Author(s):  
Tomoyuki HINO ◽  
Hitoshi TAKESHITA ◽  
Kiyo ISHII ◽  
Junya KURUMIDA ◽  
Shu NAMIKI ◽  
...  
