Deep Learning applied to the handoff of cellular systems: a survey
2019 ◽
Author(s):
Federico Aguirre

Mobility is a key aspect of current cellular networks, allowing users to access the provided services almost anywhere. When a user transitions from one base station's coverage area to another cell serviced by a different station, a handoff process takes place, in which resources are released by the first base station and allocated by the second for the purpose of servicing the user. Predicting the future location of a cell phone user allows the handoff process to be optimized. This optimization enables better utilization of the available resources, regarding both the transmitted power and the frequency allocation, resulting in less power wasted in unwanted directions and the possibility of reusing frequencies within a single base station. To achieve this goal, Deep Learning techniques are proposed, which have proven to be efficient tools for predicting and detecting patterns. The purpose of this paper is to give an overview of the state of the art in Deep Learning techniques for making spatio-temporal predictions, which could be used to optimize the handoff process in cellular systems.
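As a concrete illustration of the kind of spatio-temporal prediction the survey covers, the sketch below (our own, not taken from the paper) uses a recurrent network to predict a user's next serving cell from the sequence of cells visited so far; the cell count and all layer sizes are assumed placeholders.

```python
# Minimal sketch (not from the paper): an LSTM predicts a user's next serving
# cell from a sequence of previously visited cell IDs, so the target base
# station could pre-allocate handoff resources. All sizes are illustrative.
import torch
import torch.nn as nn

class NextCellPredictor(nn.Module):
    def __init__(self, num_cells=200, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_cells, embed_dim)   # cell ID -> vector
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_cells)      # scores per cell

    def forward(self, cell_seq):                          # (batch, seq_len)
        h, _ = self.lstm(self.embed(cell_seq))
        return self.head(h[:, -1])                        # logits for next cell

model = NextCellPredictor()
trajectory = torch.randint(0, 200, (8, 20))               # 8 users, 20 steps
next_cell = model(trajectory).argmax(dim=-1)              # predicted handoff target
```

In a real deployment the predicted cell would be fed to the admission-control logic of the target base station ahead of the actual handoff.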



Author(s):  
Priti Y. Umratkar ◽  
Harshali Chalfe ◽  
S. K. Totade

The continuous use of mobile phones can be attributed to the fact that they can be used almost anywhere, which has made them one of the most widely used devices in mobile communication and so important in our lives. The convenience and portability of cell phones has made it possible to carry them everywhere, e.g., churches, lecture halls, medical centers, etc. This benefit can create disturbance in some places: the continuous beeping or ringtones of cell phones become annoying in areas where silence is required or where the use of mobile phones is restricted or prohibited, such as libraries and study rooms. A mobile phone jammer is an instrument used to prevent cellular phones from receiving signals from a base station. It is a device that transmits a signal on the same frequency at which the GSM system operates; jamming succeeds when the mobile phones in the area where the jammer is located are disabled. The mobile phone jammer unit is intended for blocking all mobile phone types within designated indoor areas. The mobile phone jammer is a 'plug and play' unit: its installation is quick and its operation is easy. Once the mobile phone jammer is operating, all mobile phones present within the jamming coverage area are blocked, and cellular activity in the immediate surroundings (including incoming and outgoing calls, SMS, picture sending, etc.) is jammed. This paper focuses on the design of a cell phone jammer to prevent the usage of mobile communication in restricted areas without interfering with the communication channels outside its range.
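To make the coverage argument concrete, here is a hedged back-of-the-envelope link-budget sketch (our own, not the paper's design): it uses the standard free-space path-loss formula to estimate at what distance an assumed low-power indoor jammer overpowers the base station's downlink at the handset. All transmit powers and distances are invented for illustration.

```python
# Illustrative estimate only, not the paper's design: free-space path loss is
# used to check where a low-power indoor jammer's signal overpowers the serving
# base station's downlink at the handset. All values below are assumptions.
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) - 147.55."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

FREQ = 900e6            # GSM-900 band (Hz)
BS_EIRP_DBM = 43.0      # assumed base-station EIRP
BS_DISTANCE_M = 500.0   # assumed handset-to-tower distance
JAMMER_EIRP_DBM = 10.0  # assumed small indoor jammer

signal_dbm = BS_EIRP_DBM - fspl_db(BS_DISTANCE_M, FREQ)
for d in (1, 5, 10, 20, 40):
    jam_dbm = JAMMER_EIRP_DBM - fspl_db(d, FREQ)
    print(f"{d:>3} m: J/S = {jam_dbm - signal_dbm:+.1f} dB")
# Jamming is effective roughly where the J/S ratio stays above ~0 dB,
# which bounds the indoor coverage area of the device.
```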


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4486
Author(s):  
Niall O’Mahony ◽  
Sean Campbell ◽  
Lenka Krpalkova ◽  
Anderson Carvalho ◽  
Joseph Walsh ◽  
...  

Fine-grained change detection in sensor data is very challenging for artificial intelligence though it is critically important in practice. It is the process of identifying differences in the state of an object or phenomenon where the differences are class-specific and are difficult to generalise. As a result, many recent technologies that leverage big data and deep learning struggle with this task. This review focuses on the state-of-the-art methods, applications, and challenges of representation learning for fine-grained change detection. Our research focuses on methods of harnessing the latent metric space of representation learning techniques as an interim output for hybrid human-machine intelligence. We review methods for transforming and projecting embedding space such that significant changes can be communicated more effectively and a more comprehensive interpretation of underlying relationships in sensor data is facilitated. We conduct this research in our work towards developing a method for aligning the axes of latent embedding space with meaningful real-world metrics so that the reasoning behind the detection of change in relation to past observations may be revealed and adjusted. This is an important topic in many fields concerned with producing more meaningful and explainable outputs from deep learning and also for providing means for knowledge injection and model calibration in order to maintain user confidence.
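One simplified way to picture the axis-alignment idea discussed above is a linear projection fitted between latent embeddings and paired real-world metrics; the sketch below is our illustration under that assumption, not a method from the review.

```python
# Conceptual sketch (our simplification of the idea in the review): fit a
# linear map from a learned embedding space to known real-world metrics, so
# movement along each projected axis reads as change in a meaningful quantity.
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 32))                     # latent embeddings from some encoder
W_true = rng.normal(size=(32, 3))
M = Z @ W_true + 0.1 * rng.normal(size=(500, 3))   # paired real-world metrics

# Least-squares projection: columns of W map latent axes to metric axes.
W, *_ = np.linalg.lstsq(Z, M, rcond=None)

z_before, z_after = Z[0], Z[1]                     # two observations of one object
delta_metrics = (z_after - z_before) @ W           # change expressed in metric units
print("change along each interpretable axis:", delta_metrics)
```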


Recently, DDoS attacks have become the most significant threat in network security. Both industry and academia are currently debating how to detect and protect against DDoS attacks. Many studies have been conducted to detect these types of attacks. Deep learning techniques are among the most suitable and efficient algorithms for categorizing normal and attack data. Hence, a deep neural network approach is proposed in this study to mitigate DDoS attacks effectively. We used a deep learning neural network to identify and classify traffic as benign or as one of four different DDoS attack types: Slowloris, Slowhttptest, DDoS Hulk, and GoldenEye. The rest of the paper is organized as follows: first, we introduce the work; Section 2 defines the related works; Section 3 presents the problem statement; Section 4 describes the proposed methodology; Section 5 illustrates the results of the proposed methodology and shows how it outperforms state-of-the-art work; and finally Section 6 concludes the paper.
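A minimal sketch of the kind of classifier the study describes is given below; the layer sizes, the 78-feature input (typical of CICIDS-style flow records), and the training snippet are our assumptions, not the paper's reported configuration.

```python
# Hedged sketch of a five-class traffic classifier (benign + four DDoS types).
# The feature count and layer widths are placeholder assumptions.
import torch
import torch.nn as nn

CLASSES = ["Benign", "Slowloris", "Slowhttptest", "Hulk", "GoldenEye"]

model = nn.Sequential(
    nn.Linear(78, 128), nn.ReLU(),   # 78 ~ CICIDS-style flow features (assumed)
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, len(CLASSES)),     # one logit per traffic class
)
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(32, 78)                      # a batch of flow-feature vectors
y = torch.randint(0, len(CLASSES), (32,))    # ground-truth labels
loss = loss_fn(model(x), y)                  # one training step
loss.backward()
opt.step()
print(CLASSES[model(x).argmax(dim=1)[0].item()])  # predicted class of first flow
```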


Computers ◽  
2020 ◽  
Vol 9 (2) ◽  
pp. 37 ◽  
Author(s):  
Luca Cappelletti ◽  
Tommaso Fontana ◽  
Guido Walter Di Donato ◽  
Lorenzo Di Tucci ◽  
Elena Casiraghi ◽  
...  

Missing data imputation has been a hot topic in the past decade, and many state-of-the-art works have been presented that propose novel, interesting solutions, which have been applied in a variety of fields. In the past decade, the successful results achieved by deep learning techniques have opened the way to their application for solving difficult problems where human skill is not able to provide a reliable solution. Not surprisingly, some deep learners, mainly exploiting encoder-decoder architectures, have also been designed and applied to the task of missing data imputation. However, most of the proposed imputation techniques have not been designed to tackle “complex data”, that is, high-dimensional data belonging to datasets with huge cardinality and describing complex problems. Precisely, they often need critical parameters to be manually set, or they exploit complex architectures and/or training phases that make their computational load impracticable. In this paper, after clustering the state-of-the-art imputation techniques into three broad categories, we briefly review the most representative methods and then describe our data imputation proposals, which exploit deep learning techniques specifically designed to handle complex data. Comparative tests on genome sequences show that our deep learning imputers outperform the state-of-the-art KNN-imputation method when filling gaps in human genome sequences.
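As a hedged illustration of the encoder-decoder approach mentioned above (not the authors' architecture), the sketch below trains a small denoising autoencoder on masked vectors and uses its reconstruction to fill the missing entries.

```python
# Minimal encoder-decoder imputer sketch: train on masked inputs, then fill
# missing entries with the decoder's reconstruction. Sizes are illustrative.
import torch
import torch.nn as nn

dim = 100
model = nn.Sequential(                 # encoder-decoder over feature vectors
    nn.Linear(dim, 32), nn.ReLU(),     # encoder compresses the observed pattern
    nn.Linear(32, dim),                # decoder reconstructs all features
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(64, dim)                        # complete training vectors
mask = torch.rand_like(x) < 0.2                # simulate 20% missingness
x_in = x.masked_fill(mask, 0.0)                # zeros stand in for gaps
loss = ((model(x_in) - x) ** 2).mean()         # reconstruct the full vector
loss.backward()
opt.step()

x_imputed = torch.where(mask, model(x_in), x)  # keep observed, fill missing
```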


2019 ◽  
Vol 4 (4) ◽  
pp. 828-849 ◽  
Author(s):  
Daniel C. Elton ◽  
Zois Boukouvalas ◽  
Mark D. Fuge ◽  
Peter W. Chung

We review a recent groundswell of work which uses deep learning techniques to generate and optimize molecules.


2020 ◽  
Vol 12 (22) ◽  
pp. 3836
Author(s):  
Carlos García Rodríguez ◽  
Jordi Vitrià ◽  
Oscar Mora

In recent years, different deep learning techniques were applied to segment aerial and satellite images. Nevertheless, state-of-the-art techniques for land cover segmentation do not provide results accurate enough to be used in real applications. This is a problem faced by institutions and companies that want to replace time-consuming and exhausting human work with AI technology. In this work, we propose a method that combines deep learning with a human-in-the-loop strategy to achieve expert-level results at a low cost. We use a neural network to segment the images. In parallel, another network is used to measure uncertainty for the predicted pixels. Finally, we combine these neural networks with a human-in-the-loop approach to produce correct predictions as if developed by human photointerpreters. Applying this methodology shows that we can increase the accuracy of land cover segmentation tasks while decreasing human intervention.
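The routing step of such a human-in-the-loop pipeline can be sketched as follows; the uncertainty threshold and array shapes are assumptions for illustration, not the authors' settings.

```python
# Schematic of human-in-the-loop routing: pixels whose predicted uncertainty
# is high are sent to a photointerpreter; confident pixels keep the network's
# label. Threshold and shapes are assumed.
import numpy as np

def route_predictions(class_map, uncertainty_map, threshold=0.3):
    """Split a segmentation into auto-accepted pixels and a review mask."""
    needs_review = uncertainty_map > threshold
    accepted = np.where(needs_review, -1, class_map)  # -1 = pending human label
    return accepted, needs_review

class_map = np.random.randint(0, 5, (256, 256))       # stand-in for segmentation net
uncertainty = np.random.rand(256, 256)                # stand-in for uncertainty net
accepted, review = route_predictions(class_map, uncertainty)
print(f"human reviews {review.mean():.0%} of pixels") # cost of expert labelling
```

Lowering the threshold trades more expert effort for higher final accuracy, which is the cost/quality dial the abstract alludes to.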


2021 ◽  
Vol 13 (24) ◽  
pp. 5003
Author(s):  
Elisa Castelli ◽  
Enzo Papandrea ◽  
Alessio Di Roma ◽  
Ilaria Bloise ◽  
Mattia Varile ◽  
...  

In recent years, technological advancement has led to an enormous increase in the amount of satellite data. The availability of huge datasets of remote sensing measurements to be processed, and the increasing need for near-real-time data analysis for operational uses, have fostered the development of fast, efficient retrieval algorithms. Deep learning techniques were recently applied to satellite data for retrievals of target quantities. Forward models (FM) are a fundamental part of retrieval code development and mission design as well. Despite this, the application of deep learning techniques to radiative transfer simulations is still underexplored. The DeepLIM project, described in this work, aimed at testing the feasibility of applying deep learning techniques to the design of the retrieval chain of an upcoming satellite mission. The Land Surface Temperature Mission (LSTM) is a candidate for Sentinel 9 and has, as its main target, the agricultural community's need to improve sustainable productivity. To do this, the mission will carry a thermal infrared sensor to retrieve land-surface temperature and evapotranspiration rate. The LSTM land-surface temperature retrieval chain is used as a benchmark to test deep learning performance when applied to Earth observation studies. Starting from aircraft campaign data and state-of-the-art FM simulations with the DART model, deep learning techniques are used to generate new spectral features. Their statistical behavior is compared to that of the original technique to test the generation performance. Then, the high-spectral-resolution simulations are convolved with LSTM spectral response functions to obtain the radiance in the LSTM spectral channels. Simulated observations are analyzed using two state-of-the-art retrieval codes and deep learning-based algorithms. The performance of the deep learning algorithms shows promising results for both the production of simulated spectra and the retrieval of target parameters, one of the main advances being the reduction in computational costs.
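A conceptual sketch of forward-model emulation plus channel convolution is shown below; the parameter count, spectral grid, and response functions are placeholders, not DART or the actual LSTM processing chain.

```python
# Conceptual emulator sketch (shapes and physics are placeholders): a network
# maps surface/atmosphere parameters to a high-resolution spectrum, which is
# then convolved with assumed channel response functions to get band radiances.
import torch
import torch.nn as nn

n_params, n_wavelengths, n_channels = 6, 400, 5
emulator = nn.Sequential(                    # fast surrogate for the slow FM
    nn.Linear(n_params, 256), nn.ReLU(),
    nn.Linear(256, n_wavelengths),           # high-spectral-resolution radiance
)

srf = torch.rand(n_channels, n_wavelengths)  # assumed spectral response functions
srf = srf / srf.sum(dim=1, keepdim=True)     # normalise each channel's SRF

params = torch.randn(16, n_params)           # e.g. LST, emissivity, water vapour
spectrum = emulator(params)                  # (batch, n_wavelengths)
channel_radiance = spectrum @ srf.T          # (batch, n_channels) band radiances
```

Once trained against FM output, such a surrogate replaces the expensive radiative transfer call inside the retrieval loop, which is where the computational saving comes from.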


Urban Science ◽  
2018 ◽  
Vol 2 (3) ◽  
pp. 78 ◽  
Author(s):  
Deepank Verma ◽  
Arnab Jana ◽  
Krithi Ramamritham

Assessments of human perception of urban spaces are essential for the management and upkeep of surroundings. A large part of previous studies is dedicated to the visual appreciation and judgement of various physical features present in the surroundings. Visual qualities of the environment stimulate feelings of safety, pleasure, and belongingness. Scaling such assessments to cover city boundaries necessitates the assistance of state-of-the-art computer vision techniques. We developed a mobile-based application to collect visual datasets in the form of street-level imagery with the help of volunteers. We further utilised the potential of deep learning-based image analysis techniques in gaining insights into such datasets. In addition, we explained our findings with the help of environment variables which are related to individual satisfaction and wellbeing.
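As one hedged example of the kind of image analysis involved (our illustration, not the authors' pipeline), the snippet below summarises a street-level photo's visual qualities as per-class pixel fractions, given a semantic segmentation produced by some network.

```python
# Illustrative post-processing step: given a semantic segmentation of a
# street-level photo, summarise visual qualities as class fractions,
# e.g. how much greenery or sky is visible. Label set is assumed.
import numpy as np

LABELS = {0: "building", 1: "sky", 2: "vegetation", 3: "road", 4: "person"}

def visual_profile(seg_map):
    """Fraction of image pixels per semantic class."""
    total = seg_map.size
    return {name: float((seg_map == k).sum()) / total
            for k, name in LABELS.items()}

seg = np.random.randint(0, 5, (512, 512))   # stand-in for a network's output
profile = visual_profile(seg)
print(f"greenery: {profile['vegetation']:.0%}, sky: {profile['sky']:.0%}")
```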


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Tianlong Gu ◽  
Hongliang Chen ◽  
Chenzhong Bin ◽  
Liang Chang ◽  
Wei Chen

Deep learning systems have been phenomenally successful in the fields of computer vision, speech recognition, and natural language processing. Recently, researchers have adopted deep learning techniques to tackle collaborative filtering with implicit feedback. However, the existing methods generally profile both users and items directly, while neglecting the similarities between users’ and items’ neighborhoods. To this end, we propose the neighborhood attentional memory networks (NAMN), a deep learning recommendation model applying two dedicated memory networks to capture users’ neighborhood relations and items’ neighborhood relations respectively. Specifically, we first design the user neighborhood component and the item neighborhood component based on memory networks and attention mechanisms. Then, by the associative addressing scheme with the user and item memories in the neighborhood components, we capture the complex user-item neighborhood relations. Stacking multiple memory modules together yields deeper architectures exploring higher-order complex user-item neighborhood relations. Finally, the output module jointly exploits the user and item neighborhood information with the user and item memories to obtain the ranking score. Extensive experiments on three real-world datasets demonstrate significant improvements of the proposed NAMN method over the state-of-the-art methods.
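A stripped-down sketch of the associative addressing idea is given below; dimensions and details are our simplification, not the NAMN paper's exact formulation.

```python
# Simplified sketch of associative addressing: a user's query vector attends
# over neighbour memory slots to produce a neighbourhood summary vector.
import torch
import torch.nn.functional as F

d, n_neighbours = 32, 10
user_query = torch.randn(1, d)                 # target user's embedding
memory = torch.randn(n_neighbours, d)          # memory slots for similar users

scores = user_query @ memory.T                 # associative addressing scores
weights = F.softmax(scores, dim=-1)            # attention over memory slots
neighbourhood = weights @ memory               # weighted neighbourhood vector

# An output module would combine `neighbourhood` with the user's own memory
# and the analogous item-side vector to produce the final ranking score;
# stacking several such modules gives the deeper architectures described.
```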

