ITFDS: Channel-Aware Integrated Time and Frequency-Based Downlink LTE Scheduling in MANET

Sensors ◽  
2020 ◽  
Vol 20 (12) ◽  
pp. 3394
Author(s):  
Le Minh Tuan ◽  
Le Hoang Son ◽  
Hoang Viet Long ◽  
L. Rajaretnam Priya ◽  
K. Ruba Soundar ◽  
...  

One of the crucial problems in Industry 4.0 is how to strengthen the performance of mobile communication within mobile ad-hoc networks (MANETs) and mobile computational grids (MCGs). In communication, Industry 4.0 needs dynamic network connectivity with higher speed and bandwidth. In order to support multiple users for video calling or conferencing with high-speed transmission rates and low packet loss, 4G technology was introduced by the 3rd Generation Partnership Project (3GPP). 4G LTE, where LTE stands for Long Term Evolution, is the 4G technology developed to achieve 4G speeds. 4G LTE supports multiple downlink users with higher-order modulation up to 64 quadrature amplitude modulation (QAM). With wide coverage, high reliability and large capacity, LTE networks are widely used in Industry 4.0. However, there are many kinds of equipment with different quality of service (QoS) requirements. In the existing LTE scheduling methods, the scheduler in frequency-domain packet scheduling exploits spatial, frequency, and multi-user diversity to achieve larger MIMO gains for the required QoS level. In contrast, time-frequency LTE scheduling pays attention to temporal and utility fairness. It is desirable to have a new solution that combines both the time and frequency domains for real-time applications with fairness among users. In this paper, we propose a channel-aware Integrated Time and Frequency-based Downlink LTE Scheduling (ITFDS) algorithm, which is suitable for both real-time and non-real-time applications. Firstly, it calculates the channel capacity and quality using the channel quality indicator (CQI). Additionally, data broadcasting is maintained using dynamic class-based establishment (DCE). In the time domain, we calculate the queue length before transmitting the next packets. In the frequency domain, we use the largest weighted delay first (LWDF) scheduling algorithm to allocate resources to all users.
All allocations take place within the same transmission time interval (TTI). The new method is compared against the largest weighted delay first (LWDF), proportional fair (PF), maximum throughput (MT), and exponential/proportional fair (EXP/PF) methods. Experimental results show that the performance improves by around 12% compared with these algorithms.
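Since the frequency-domain step relies on LWDF, a minimal sketch of the LWDF priority metric may help. The user names, delay budgets, and violation probabilities below are hypothetical; a real scheduler would combine this metric with per-resource-block channel rates before assigning resources.

```python
import math

def lwdf_priorities(users):
    """Largest Weighted Delay First: rank users by a_i * W_i, where
    W_i is the head-of-line packet delay and a_i = -log(delta_i) / tau_i
    weights each user by its delay budget tau_i and the tolerated
    probability delta_i of exceeding that budget."""
    scored = []
    for uid, (hol_delay, tau, delta) in users.items():
        a = -math.log(delta) / tau
        scored.append((a * hol_delay, uid))
    return [uid for _, uid in sorted(scored, reverse=True)]

# Hypothetical users: (head-of-line delay [s], delay budget [s], violation prob.)
users = {
    "video": (0.040, 0.100, 0.05),  # tight budget -> scheduled first
    "web":   (0.040, 0.300, 0.10),
}
print(lwdf_priorities(users))  # → ['video', 'web']
```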

2013 ◽  
Vol 333-335 ◽  
pp. 650-655
Author(s):  
Peng Hui Niu ◽  
Yin Lei Qin ◽  
Shun Ping Qu ◽  
Yang Lou

A new signal-processing method for phase difference estimation was proposed based on a time-varying signal model, whose frequency, amplitude and phase all vary with time, and was then applied to the Coriolis mass flowmeter signal. First, a bandpass FIR filter was applied to the sensor output signal in order to improve the SNR. Then, the signal frequency was calculated based on short-time frequency estimation. Finally, by intercepting a short window, the DTFT algorithm with negative-frequency contribution was introduced to calculate the real-time phase difference between the two enhanced signals. With the frequency and the phase difference obtained, the time interval between the two signals was calculated. Simulation results show that the studied algorithms are efficient. Furthermore, their computation is simple enough to be applied to real-time signal processing for Coriolis mass flowmeters.
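The final step can be sketched with a single-bin DTFT evaluated at the estimated frequency. The sampling rate, window length, and stationary test tones below are illustrative, and the sketch sidesteps the negative-frequency correction by using an integer number of signal periods; the paper's method compensates that contribution explicitly for short windows.

```python
import numpy as np

def phase_difference(x1, x2, f, fs):
    """Evaluate the DTFT of both windowed signals at the single
    frequency f and return the phase difference in radians."""
    n = np.arange(len(x1))
    kernel = np.exp(-1j * 2 * np.pi * f * n / fs)
    X1 = np.sum(x1 * kernel)
    X2 = np.sum(x2 * kernel)
    return np.angle(X1) - np.angle(X2)

fs = 8000.0          # sampling rate [Hz]
f = 100.0            # estimated signal frequency [Hz]
t = np.arange(400) / fs   # 400 samples = 5 full periods at 100 Hz
true_dphi = 0.3      # radians, simulated sensor phase shift
x1 = np.sin(2 * np.pi * f * t)
x2 = np.sin(2 * np.pi * f * t - true_dphi)

dphi = phase_difference(x1, x2, f, fs)
dt = dphi / (2 * np.pi * f)  # time interval between the two signals [s]
```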


2021 ◽  
Vol 18 (3) ◽  
pp. 271-289
Author(s):  
Evgeniia Bulycheva ◽  
Sergey Yanchenko

The harmonic contributions of the utility and the customer may vary significantly due to network switching and changing operational modes. In order to correctly define the impacts on the grid voltage distortion, the frequency-dependent impedance characteristic of the studied network should be accurately measured in real time. This condition can be fulfilled by designing a stimuli generator that measures the grid impedance as the response to an injected interference and produces time-frequency plots of harmonic contributions over the considered time interval. In this paper, a prototype of a stimuli generator based on a programmable voltage-source inverter is developed and tested. The use of a ternary pulse sequence allows fast wide-band impedance measurements that meet the requirements of real-time assessment of harmonic contributions. The accuracy of the respective analysis, involving impedance determination and calculation of harmonic contributions, is validated experimentally using reference characteristics of a laboratory test set-up with varying grid impedance.
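The wide-band measurement principle can be sketched as follows, assuming an idealized series R-L branch and noiseless acquisition; the component values, sequence length, and sampling rate are hypothetical stand-ins for the prototype's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 10_000.0
N = 4096
# Wide-band ternary excitation: each sample drawn from {-1, 0, +1}
u = rng.choice([-1.0, 0.0, 1.0], size=N)

# Hypothetical grid branch under test: series R-L, Z(f) = R + j*2*pi*f*L
R, L = 0.5, 1e-3
freqs = np.fft.rfftfreq(N, d=1.0 / fs)
Z_true = R + 1j * 2.0 * np.pi * freqs * L

# Simulate the current drawn when the ternary voltage perturbation is
# applied, then estimate the impedance from the two "measured" spectra
# in one shot at every FFT bin (this is what makes the method fast).
i = np.fft.irfft(np.fft.rfft(u) / Z_true, N)
Z_est = np.fft.rfft(u) / np.fft.rfft(i)
```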


2021 ◽  
Author(s):  
Jiawen Chua

In most real-time systems, particularly for applications involving system identification, latency is a critical issue. These applications include, but are not limited to, blind source separation (BSS), beamforming, speech dereverberation, acoustic echo cancellation and channel equalization. The system latency consists of an algorithmic delay and an estimation computational time. The latter can be avoided by using a multi-thread system, which runs the estimation process and the processing procedure simultaneously. The former, which consists of a delay of one window length, is usually unavoidable for frequency-domain approaches. In these approaches, a block of data is acquired by using a window, transformed and processed in the frequency domain, and recovered back to the time domain by using an overlap-add technique.

In the frequency domain, the convolutive model, which is usually used to describe the process of a linear time-invariant (LTI) system, can be represented by a series of multiplicative models to facilitate estimation. To implement frequency-domain approaches in real-time applications, the short-time Fourier transform (STFT) is commonly used. The window used in the STFT must be at least twice the length of the room impulse response, which is long, so that the multiplicative model is sufficiently accurate. The delay constraint caused by the associated blockwise processing window length makes most of the frequency-domain approaches inapplicable to real-time systems.

This thesis aims to design a BSS system that can be used in a real-time scenario with minimal latency. Existing BSS approaches can be integrated into our system to perform source separation with low delay without affecting the separation performance. The second goal is to design a BSS system that can perform source separation in a non-stationary environment.

We first introduce a subspace approach to directly estimate the separation parameters in the low-frequency-resolution time-frequency (LFRTF) domain. In the LFRTF domain, a shorter window is used to reduce the algorithmic delay of the system during signal acquisition, e.g., the window length is shorter than the room impulse response. The subspace method facilitates the deconvolution of a convolutive mixture into a new instantaneous mixture and simplifies the estimation process.

Second, we propose an alternative approach to the algorithmic latency problem. This method obtains the separation parameters in the LFRTF domain from parameters estimated in the high-frequency-resolution time-frequency (HFRTF) domain, where the window length is longer than the room impulse response, without affecting the separation performance.

The thesis also provides a solution to the BSS problem in a non-stationary environment. We utilize the "meta-information" obtained from previous BSS operations to facilitate future separation without performing the entire BSS process again. Repeating a BSS process can be computationally expensive. Most conventional BSS algorithms require sufficient signal samples to perform analysis, and this prolongs the estimation delay. By utilizing information from the entire spectrum, our method enables us to update the separation parameters with only a single snapshot of observation data. Hence, our method minimizes the estimation period, reduces the redundancy and improves the efficacy of the system.

The final contribution of the thesis is a non-iterative method for impulse response shortening. This method allows us to use a shorter representation to approximate the long impulse response. It further improves the computational efficiency of the algorithm and yet achieves satisfactory performance.
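The blockwise STFT processing and its one-window latency can be sketched as a plain analysis-synthesis round trip with no separation step; the window and hop sizes below are assumed for illustration.

```python
import numpy as np

def stft_roundtrip(x, win_len, hop):
    """Blockwise STFT with a Hann window, weighted overlap-add (WOLA)
    synthesis, and normalization by the summed squared windows.
    A sample can only be emitted once the analysis window containing
    it is complete, so the output lags the input by one window length
    -- the algorithmic delay discussed above."""
    win = np.hanning(win_len)
    y = np.zeros(len(x))
    norm = np.zeros(len(x))
    for start in range(0, len(x) - win_len + 1, hop):
        frame = np.fft.rfft(x[start:start + win_len] * win)
        # ... frequency-domain processing (e.g. separation) goes here ...
        y[start:start + win_len] += np.fft.irfft(frame, win_len) * win
        norm[start:start + win_len] += win ** 2
    covered = norm > 1e-12
    y[covered] /= norm[covered]
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
y = stft_roundtrip(x, win_len=256, hop=64)  # interior samples reconstruct exactly
```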


2019 ◽  
Vol 2019 ◽  
pp. 1-16
Author(s):  
Peyman Jafary ◽  
Antti Supponen ◽  
Mikko Salmenperä ◽  
Sami Repo

In an electrical distribution network, Logic Selectivity significantly reduces both the number and duration of outages. Generic Object-Oriented Substation Events (GOOSE) have a key role in the decision-making process of substation protection devices using GOOSE-based Logic Selectivity. GOOSE messages are exchanged between remote protection devices over the communication network. Secured communication with low latency and high reliability is therefore required in order to ensure reliable operation as well as meeting real-time requirement of the Logic Selectivity application. There is thus a need to evaluate feasibility of the selected communication network technology for Logic Selectivity use cases. This paper analyzes reliability of cellular 4G/LTE Internet for GOOSE communication in a Logic Selectivity application. For this purpose, experimental lab set-ups are introduced for different configurations: ordinary GOOSE communication, secured GOOSE communication by IPsec in Transport mode, and redundant GOOSE communication using the IEC 62439-3 Parallel Redundancy Protocol. In each configuration, the GOOSE retransmissions are recorded for a period of three days and the average GOOSE transmission time is measured. Furthermore, the measured data is classified into histograms and a probability value for communication reliability, based on the transmission time, is calculated. The statistical analysis shows that 4G Internet satisfies the real-time and reliability requirements for secure and highly available GOOSE-based Logic Selectivity.
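The statistical step can be sketched as follows; the lognormal transfer-time samples and the 100 ms deadline are illustrative stand-ins, not the paper's three-day measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical GOOSE transfer-time samples over 4G/LTE (seconds),
# drawn from a lognormal spread around a 40 ms median for illustration
samples = rng.lognormal(mean=np.log(0.040), sigma=0.3, size=10_000)

deadline = 0.100  # assumed real-time budget for the Logic Selectivity application
# Empirical reliability: fraction of transfers meeting the deadline
reliability = float(np.mean(samples <= deadline))

# Classify the measurements into a histogram, as in the paper's analysis
counts, edges = np.histogram(samples, bins=50)
```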


2017 ◽  
Vol 66 (7) ◽  
pp. 1864-1873 ◽  
Author(s):  
Chun-Kwon Lee ◽  
Gu-Young Kwon ◽  
Seung Jin Chang ◽  
Moon Kang Jung ◽  
Jin Bae Park ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Peng Lu ◽  
Yabin Zhang ◽  
Bing Zhou ◽  
Hongpo Zhang ◽  
Liwei Chen ◽  
...  

In recent years, methods based on deep neural networks (DNNs) have made breakthrough progress in detecting cardiac arrhythmias, as the cost-effectiveness of computing power and the available data sizes have passed a tipping point. However, the inability of these methods to provide a basis for their decisions limits clinicians' confidence in them. In this paper, a fusion model of a Gated Recurrent Unit (GRU) and a decision tree, referred to as T-GRU, was designed to explore the problem of arrhythmia recognition and to improve the credibility of deep learning methods. The fusion model processes time-frequency-domain features along multiple paths: a decision tree performs probability analysis of the frequency-domain features, while regularization and weight control of the GRU model parameters adjust the output weights of the decision tree model. The MIT-BIH arrhythmia database was used for validation. Results showed that the low-frequency band features dominated the model prediction. The fusion model achieved an accuracy of 98.31%, sensitivity of 96.85%, specificity of 98.81%, and precision of 96.73%, indicating its high reliability and clinical significance.
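A minimal sketch of late fusion between the two model outputs follows; the class posteriors and the fixed tree weight are hypothetical (the paper's weight control adjusts this contribution rather than fixing it).

```python
import numpy as np

def fuse(p_gru, p_tree, w):
    """Late fusion of the GRU's class posteriors with the decision
    tree's probability output; w controls the tree's contribution."""
    p = (1 - w) * np.asarray(p_gru) + w * np.asarray(p_tree)
    return p / p.sum()  # renormalize to a probability vector

p_gru = [0.7, 0.2, 0.1]   # hypothetical per-beat arrhythmia posteriors (GRU)
p_tree = [0.5, 0.4, 0.1]  # tree probabilities from frequency-domain features
fused = fuse(p_gru, p_tree, w=0.3)
print(int(np.argmax(fused)))  # → 0 (predicted class index)
```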

