Interference Management for the Coexistence of DTTV and LTE Systems within the Proposed Digital Dividend Band in Nigeria

Author(s):  
B. I. Bakare ◽  
V. E. Idigo ◽  
S. U. Nnebe

This paper presents interference management for the coexistence of Digital Terrestrial Television (DTTV) and LTE systems within the proposed digital dividend band in Nigeria. The study focused on the LTE downlink (DL) signal from the nearest cell site interfering with a DTTV fixed outdoor receiving antenna in Port Harcourt, Nigeria. Qualitative signal analysis of the DTTV system is essential, as DTTV cannot begin operating in the newly formed frequency band without an evaluation of the possible harmful influence of coexisting systems. This work investigated the compatibility of the two systems and the probability of interference on channel 17 (490 MHz) and channel 51 (693 MHz) when DTTV and LTE systems coexist within the proposed digital dividend band. A test-bed approach was adopted to generate the required simulation data. The Star Time transmitting station in Port Harcourt and the Smile 4G LTE base station (eNB) network, also in Port Harcourt, were adopted as the Victim Link Transmitter (VLT) and Interfering Link Transmitter (ILT), respectively. Data were obtained, analyzed, and evaluated. The simulation results showed that the probability of interference is a function of the separation distance between the ILT and the victim link receiver (VLR). The compatibility analysis showed that the resulting carrier-to-interference ratio (C/I) is above the protection criterion (19 dB); that is, the rate of interference is minimal. Hence, the interference issue can be managed when the two systems coexist in the 700 MHz band. It was also established that DTTV channel 51 suffers more interference than DTTV channel 17 at the same separation distance. The study recommended the minimum protection distance approach (an interference avoidance method) as the interference management technique when DTTV and LTE systems coexist in the proposed digital dividend (700 MHz) band in Nigeria.
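As an illustration of the distance dependence the abstract reports, the sketch below estimates the C/I at a DTTV receiver from an LTE eNB using a free-space path-loss model and solves for the minimum protection distance that satisfies the 19 dB criterion. All numeric parameters (received carrier level, eNB EIRP) are assumed for illustration and are not taken from the paper.

```python
import math

def fspl_db(d_m: float, f_mhz: float) -> float:
    """Free-space path loss (dB) at distance d_m metres, frequency f_mhz MHz."""
    return 20 * math.log10(d_m) + 20 * math.log10(f_mhz) - 27.55

def c_over_i_db(carrier_dbm, lte_eirp_dbm, d_m, f_mhz):
    """C/I (dB) at the DTTV receiver for an LTE interferer d_m metres away."""
    interference_dbm = lte_eirp_dbm - fspl_db(d_m, f_mhz)
    return carrier_dbm - interference_dbm

def min_protection_distance_m(carrier_dbm, lte_eirp_dbm, f_mhz, ci_target_db=19):
    """Smallest ILT-VLR separation (metres) meeting the C/I protection criterion."""
    # Solve carrier - (eirp - fspl(d)) >= ci_target for d.
    fspl_needed = ci_target_db + lte_eirp_dbm - carrier_dbm
    return 10 ** ((fspl_needed + 27.55 - 20 * math.log10(f_mhz)) / 20)

# Assumed: -50 dBm DTTV carrier at the receiver, 43 dBm eNB EIRP, channel 51.
for d in (100, 1000, 10000):
    print(f"{d:>6} m: C/I = {c_over_i_db(-50, 43, d, 693):6.1f} dB")
print(f"minimum protection distance ≈ {min_protection_distance_m(-50, 43, 693):.0f} m")
```

Note that free-space loss alone cannot reproduce the channel-51-versus-channel-17 finding: channel 51's greater vulnerability stems from its adjacency to the LTE 700 MHz band (smaller frequency offset, hence less adjacent-channel rejection), which this simple co-channel model does not capture.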

2017 ◽  
Vol 16 (7) ◽  
pp. 7031-7039
Author(s):  
Chamanpreet Kaur ◽  
Vikramjit Singh

Wireless sensor networks have revolutionized the way computing and software services are delivered to clients on demand. Our research work proposes a new method for cluster head (CH) selection with lower computational complexity; the modified approach was also found to outperform other clustering approaches. The cluster head election mechanism includes parameters such as the maximum residual energy of a node, a minimum separation distance, and the minimum distance to the mobile node. Each CH creates a TDMA schedule for its member nodes to transmit data. Nodes use three power levels for signal amplification. Since a member node sends only its own data to the cluster head, its power level is set to low. The cluster head sends the data of the whole cluster to the mobile node, so its power level is set to medium. The high power level is used by the mobile node, which sends the data of the complete sector to the base station. Using a low energy level for intra-cluster transmissions (within the cluster), relative to cluster-head-to-mobile-node transmission, saves a considerable amount of energy. Moreover, multiple power levels also reduce the packet drop ratio, collisions, and interference with other signals. The proposed algorithm was found to give a much improved network lifetime compared to existing work. Based on our model, multiple experiments were conducted using different values of initial energy.
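A minimal sketch of the kind of CH election described above, assuming plain (x, y) node coordinates and scalar residual energies; the greedy ranking by residual energy with a minimum-separation constraint, and the role-based power levels, are illustrative stand-ins rather than the authors' exact algorithm.

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def elect_cluster_heads(nodes, mobile_pos, n_heads, min_sep):
    """Greedy CH election: prefer high residual energy, break ties by
    proximity to the mobile node, and enforce a minimum separation
    between elected CHs.  nodes: dicts with 'pos' (x, y) and 'energy'."""
    ranked = sorted(nodes, key=lambda n: (-n['energy'], dist(n['pos'], mobile_pos)))
    heads = []
    for n in ranked:
        if all(dist(n['pos'], h['pos']) >= min_sep for h in heads):
            heads.append(n)
        if len(heads) == n_heads:
            break
    return heads

# Assumed role-based amplification levels mirroring the three-tier scheme.
POWER_LEVEL = {'member': 'low', 'cluster_head': 'medium', 'mobile_node': 'high'}
```

A member node would then amplify at `POWER_LEVEL['member']` for its short intra-cluster hop, while the elected heads use the medium level for the longer hop to the mobile node.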


Author(s):  
Akindele Segun Afolabi ◽  
Shehu Ahmed ◽  
Olubunmi Adewale Akinola

Due to the increased demand for scarce wireless bandwidth, it has become insufficient to serve network user equipment using macrocell base stations alone. Network densification through the addition of low-power nodes (picocells) to conventional high-power nodes addresses the bandwidth dearth but unfortunately introduces unwanted interference into the network, which reduces throughput. This paper developed a reinforcement learning model that assists in coordinating interference in a heterogeneous network comprising macrocell and picocell base stations. The learning mechanism was derived from Q-learning, which consists of agent, state, action, and reward. The base station was modeled as the agent, while the state represented the condition of the user equipment in terms of signal-to-interference-plus-noise ratio (SINR). The action was represented by the transmission power level, and the reward was given in terms of throughput. Simulation results showed that the proposed Q-learning scheme improved average user equipment throughput in the network. In particular, multi-agent systems with a normal learning rate increased the throughput of associated user equipment by 212.5% compared to a macrocell-only scheme.
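The Q-learning components described (agent = base station, state = quantised SINR, action = transmit power level, reward = throughput) can be sketched as follows. The discretisation and hyperparameters are assumptions for illustration, not values from the paper.

```python
import random

# Assumed discretisation: SINR buckets as states, power levels (dBm) as actions.
STATES = range(5)
ACTIONS = [10, 20, 30, 40]

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def choose_action(state):
    """Epsilon-greedy selection of a transmit power level for this state."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning update; reward is the observed UE throughput."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

In a multi-agent deployment like the one evaluated, each base station would hold its own Q table and call `update` after observing the throughput that its power choice produced.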


2021 ◽  
Author(s):  
Joydev Ghosh

This research work explores small cell densification as a key technique for next-generation wireless networks (NGWN). Small cell densification comprises space (i.e., dense deployment of femtocells) and spectrum (i.e., utilization of the frequency band at large). The usage of femtocells not only improves the spectral efficiency (SE) of heterogeneous two-tier networks over the conventional approach, but also alleviates outage probability and enhances the achievable capacity. We develop an analytical framework that establishes the density of femto base stations (FBS) as a monotonically increasing or decreasing function of distance or radius, respectively. This ensures enhanced performance in spectrum-sharing Orthogonal Frequency Division Multiple Access (OFDMA) femtocell network models. We also illustrate, via simulation results, the influence of active femto users (i.e., users in femtocells, who usually have low mobility and are located close to the cell centre with less fading) and of cluster size (i.e., a group of adjacent macrocells which use all of the system's frequency assignments).


2018 ◽  
Vol 2018 ◽  
pp. 1-15 ◽  
Author(s):  
Qinbao Xu ◽  
Rizwan Akhtar ◽  
Xing Zhang ◽  
Changda Wang

In wireless sensor networks (WSNs), data provenance records the data source and the forwarding and aggregating information of a packet on its way to the base station (BS). To conserve energy and wireless communication bandwidth, provenances are compressed at each node along the packet path. To perform provenance compression in resource-constrained WSNs, we present a cluster-based arithmetic coding method which not only achieves a higher compression rate but can also encode and decode the provenance incrementally; i.e., the provenance can be zoomed in and out like Google Maps. Such a decoding method raises the efficiency of both provenance decoding and data trust assessment. Furthermore, the relationship between cluster size and provenance size is formally analyzed, and the optimal cluster size is derived as a mathematical function of the WSN's size. Both simulation and test-bed experimental results show that our scheme outperforms known arithmetic-coding-based provenance compression schemes with respect to average provenance size, energy consumption, and communication bandwidth consumption.
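For readers unfamiliar with arithmetic-coded provenance, the sketch below encodes a packet path (a sequence of node IDs) into a single fraction and decodes it back. It uses exact `Fraction` arithmetic and a fixed per-node probability model for clarity; it illustrates the underlying principle only and does not implement the paper's cluster-based incremental variant.

```python
from fractions import Fraction

def cum_intervals(probs):
    """Cumulative [low, high) interval per symbol, in insertion order."""
    lo, out = Fraction(0), {}
    for sym, p in probs.items():
        out[sym] = (lo, lo + p)
        lo += p
    return out

def encode(path, probs):
    """Arithmetic-encode a provenance path (sequence of node IDs)."""
    iv = cum_intervals(probs)
    lo, width = Fraction(0), Fraction(1)
    for node in path:
        s_lo, s_hi = iv[node]
        lo, width = lo + width * s_lo, width * (s_hi - s_lo)
    return lo  # any value in [lo, lo + width) identifies the path

def decode(code, probs, length):
    """Recover the path; round-trips with encode for the same model."""
    iv = cum_intervals(probs)
    path = []
    for _ in range(length):
        for node, (s_lo, s_hi) in iv.items():
            if s_lo <= code < s_hi:
                path.append(node)
                code = (code - s_lo) / (s_hi - s_lo)
                break
    return path
```

More probable forwarders get wider intervals and thus cost fewer bits, which is why a probability model fitted to routing behaviour shrinks the average provenance size.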


2020 ◽  
Vol 13 (2) ◽  
pp. 35-42
Author(s):  
Mirza Imran ◽  
Abdul Khader P. Sheikh

Hydrological disasters have the largest share in the global disaster list, and in 2016 Asia accounted for 41% of the global occurrence of flood disasters. Jammu and Kashmir is one of the most flood-prone regions of the Indian Himalayas. In the 2014 floods, approximately 268 people died and 168,004 houses were damaged. The Pulwama, Srinagar, and Bandipora districts were severely affected, with 102, 100, and 148 km² respectively submerged in floods. Early Warning Systems (EWS) were developed to predict floods and warn people before the actual event occurs, improving the preparedness of the community for disaster. An EWS does not prevent floods, but it helps greatly in reducing the loss of life and property. A flood monitoring and early warning system is proposed in this research work. The system is composed of base stations and a control center. A base station comprises a sensing module and a processing module, which make a localised prediction of the water level and transmit the predicted results and measured data to the control center. The control center uses a hybrid of an Adaptive Neuro-Fuzzy Inference System (ANFIS) model and a supervised machine learning technique, the Linear Multiple Regression (LMR) model, for water level prediction. This hybrid system achieved a high accuracy of 93.53% for daily predictions and 99.91% for hourly predictions.
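The LMR half of the hybrid can be illustrated with a plain ordinary-least-squares fit. The sketch below solves the normal equations in pure Python; the feature names (e.g. rainfall, upstream level) and data are made up, and it stands in for the paper's LMR model only (the ANFIS component is omitted).

```python
def fit_lmr(X, y):
    """OLS fit for y ≈ b0 + b1*x1 + ... via the normal equations."""
    rows = [[1.0] + list(x) for x in X]  # prepend intercept column
    n = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in reversed(range(n)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

def predict(coef, x):
    """Predicted water level for one feature vector x."""
    return coef[0] + sum(c * xi for c, xi in zip(coef[1:], x))
```

In the described architecture, a base station would run such a fitted model locally for its short-horizon prediction before forwarding data to the control center.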


2017 ◽  
Vol 169 (2) ◽  
pp. 76-82
Author(s):  
Jakub KALKE ◽  
Marcin OPALIŃSKI ◽  
Paweł MAZURO

The article presents the rationale for developing a 0D predictive and diagnostic model for opposed-piston (OP) engines. Firstly, a description of OP engines, together with their most important advantages and challenges, is given along with current research work. Secondly, the characteristics of the PAMAR-4 engine are presented. The proposed 0D predictive model is then described and compared with commercially available software. The test stand, with its most important sensors and solutions, is presented. The custom Engine Control Unit software is then characterized together with a 0D diagnostic model. The next part discusses specific challenges that still have to be solved. After that, the preliminary test bed results are presented and compared with the 0D simulations. Finally, a summary together with possible future improvements of both the 0D predictive model and the test bed capabilities is given.


Author(s):  
Dhiman Chowdhury ◽  
Mohammad Sharif Miah ◽  
Md. Feroz Hossain ◽  
Uzzal Sarker

Emergency back-up power supply units are necessary in cases of grid power shortage, considerably poor regulation, or the costly establishment of a power system facility. In this regard, systems based on power electronic converters emerge as consistent, properly controlled, and inexpensive electrical energy providers. This paper presents an implemented design of a grid-tied emergency back-up power supply for medium- and low-power applications. It comprises a rectifier-linked boost-derived DC-DC battery charging circuit and a 4-switch push-pull power inverter (DC-AC) circuit, both controlled by pulse width modulation (PWM) signals. A changeover-relay-based transfer switch controls the power flow towards the utility loads. During off-grid situations, loads are fed by the proposed system; during on-grid situations, the battery is charged by an AC-link rectifier-fed boost converter. Charging of the battery is controlled by a relay-switched protection circuit. Laboratory experiments were carried out extensively for different loads. Power quality assessments along with back-up durations were recorded and analyzed. In addition, a cost allocation affirms the economic feasibility of the proposed framework for reasonable consumer applications. The test-bed results corroborate the reliability of the research work.
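Two of the control relationships mentioned above can be sketched numerically: the ideal duty cycle of the boost-derived charging stage, and a hysteretic model of the relay-switched charge protection. The threshold voltages are assumed values typical of a 12 V lead-acid battery and are illustrative only.

```python
def boost_duty_cycle(v_in: float, v_out: float) -> float:
    """Ideal CCM boost converter relation V_out = V_in / (1 - D), solved for D."""
    if v_out <= v_in:
        raise ValueError("a boost stage requires V_out > V_in")
    return 1.0 - v_in / v_out

def charger_relay_closed(v_batt: float, was_closed: bool,
                         v_full: float = 14.4, v_resume: float = 13.2) -> bool:
    """Hysteretic relay control: open (stop charging) at v_full,
    close (resume charging) below v_resume; otherwise hold state."""
    if v_batt >= v_full:
        return False
    if v_batt <= v_resume:
        return True
    return was_closed

# e.g. rectified ~12 V DC boosted to a 14.4 V charging rail
duty = boost_duty_cycle(12.0, 14.4)  # ≈ 0.167
```

The hysteresis band prevents relay chatter around the full-charge voltage, which is the practical point of a relay-switched protection circuit.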


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6555
Author(s):  
Radwa Ahmed Osman ◽  
Sherine Nagy Saleh ◽  
Yasmine N. M. Saleh

The co-existence of fifth-generation (5G) and Internet-of-Things (IoT) has become inevitable in many applications since 5G networks have created steadier connections and operate more reliably, which is extremely important for IoT communication. During transmission, IoT devices (IoTDs) communicate with IoT Gateway (IoTG), whereas in 5G networks, cellular users equipment (CUE) may communicate with any destination (D) whether it is a base station (BS) or other CUE, which is known as device-to-device (D2D) communication. One of the challenges that face 5G and IoT is interference. Interference may exist at BSs, CUE receivers, and IoTGs due to the sharing of the same spectrum. This paper proposes an interference avoidance distributed deep learning model for IoT and device to any destination communication by learning from data generated by the Lagrange optimization technique to predict the optimum IoTD-D, CUE-IoTG, BS-IoTD and IoTG-CUE distances for uplink and downlink data communication, thus achieving higher overall system throughput and energy efficiency. The proposed model was compared to state-of-the-art regression benchmarks, which provided a huge improvement in terms of mean absolute error and root mean squared error. Both analytical and deep learning models reached the optimal throughput and energy efficiency while suppressing interference to any destination and IoTG.
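To make the throughput/energy-efficiency trade-off concrete, the sketch below computes Shannon throughput and bits-per-joule for a link subject to a single interferer under a power-law path-loss model. All constants (path-loss exponent, gain, noise, bandwidth) are assumed, and no Lagrange optimization or learning is performed; it only shows why the interferer's distance drives the achievable rate.

```python
import math

def received_power_w(tx_power_w, d_m, path_loss_exp=3.5, k=1e-4):
    """Simple power-law path-loss model (assumed constants, illustrative)."""
    return tx_power_w * k * d_m ** (-path_loss_exp)

def throughput_bps(tx_w, d_link_m, interferer_tx_w, d_interferer_m,
                   bandwidth_hz=1e6, noise_w=1e-13):
    """Shannon throughput of an IoTD-IoTG (or CUE-D) link under one interferer."""
    s = received_power_w(tx_w, d_link_m)
    i = received_power_w(interferer_tx_w, d_interferer_m)
    sinr = s / (i + noise_w)
    return bandwidth_hz * math.log2(1 + sinr)

def energy_efficiency(tx_w, d_link_m, interferer_tx_w, d_interferer_m):
    """Bits per joule: throughput divided by transmit power."""
    return throughput_bps(tx_w, d_link_m, interferer_tx_w, d_interferer_m) / tx_w
```

Pushing the interferer farther away raises the SINR and hence both metrics, which is exactly the quantity the paper's model predicts optimal distances for.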

