A Deep Learning Approach for Maximum Activity Links in D2D Communications

Sensors ◽  
2019 ◽  
Vol 19 (13) ◽  
pp. 2941 ◽  
Author(s):  
Bocheng Yu ◽  
Xingjun Zhang ◽  
Francesco Palmieri ◽  
Erwan Creignou ◽  
Ilsun You

Mobile cellular communications are experiencing an exponential growth in traffic load on Long Term Evolution (LTE) eNode B (eNB) components. Such load can be significantly contained by directly sharing content among nearby users through device-to-device (D2D) communications, so that repeated downloads of the same data can be avoided as much as possible. Accordingly, for the purpose of improving the efficiency of content sharing and decreasing the load on the eNB, it is important to maximize the number of simultaneous D2D transmissions. Specifically, maximizing the number of D2D links can not only improve spectrum and energy efficiency but can also reduce transmission delay. However, enabling the maximum number of D2D links in a cellular network poses two major challenges. First, the interference between the D2D and cellular communications could critically affect their performance. Second, the minimum quality of service (QoS) requirements of cellular and D2D communications must be guaranteed. Therefore, the selection of active links is critical to achieving the maximum number of D2D links. This can be formulated as a classical integer linear programming problem (link scheduling) that is known to be NP-hard. This paper proposes to obtain a set of network features via deep learning for solving this challenging problem. The idea is to solve the D2D link scheduling problem with a deep neural network (DNN), which yields a significant time reduction for delay-sensitive operations, since the computational overhead is mainly spent in the training process of the model. Simulations performed on randomly generated link scheduling problems showed that our algorithm is capable of finding satisfactory D2D link scheduling solutions while reducing computation time by up to 90% without significantly affecting accuracy.
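Read as supervised learning, the approach above amounts to training a network to imitate the expensive ILP solution. Below is a minimal, hedged sketch of such a feed-forward scheduler in PyTorch; the feature layout, layer sizes, thresholding step, and training labels are illustrative assumptions, not the authors' actual architecture.

```python
# A minimal sketch (not the authors' code): a feed-forward DNN that maps
# per-link channel/interference features to on/off logits for each candidate
# D2D link. Feature layout, layer sizes, labels, and thresholds are assumptions.
import torch
import torch.nn as nn

class LinkScheduler(nn.Module):
    def __init__(self, num_links: int, feat_per_link: int = 4, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_links * feat_per_link, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_links),          # one logit per candidate D2D link
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features)

model = LinkScheduler(num_links=20)
loss_fn = nn.BCEWithLogitsLoss()                   # each link is an independent on/off decision
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy training batch; in the paper the labels would come from solving the ILP offline.
features = torch.randn(64, 20 * 4)
labels = torch.randint(0, 2, (64, 20)).float()
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()

# At inference time, links whose sigmoid output exceeds 0.5 are activated,
# subject to the QoS checks described in the abstract.
schedule = (torch.sigmoid(model(features)) > 0.5).int()
```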

Electronics ◽  
2019 ◽  
Vol 8 (11) ◽  
pp. 1361 ◽  
Author(s):  
Tae-Won Ban ◽  
Woongsup Lee

Recently, device-to-device (D2D) communications have been attracting substantial attention because they can greatly improve coverage, spectral efficiency, and energy efficiency compared to conventional cellular communications. They are also indispensable for the mobile caching network, an emerging technology for next-generation mobile networks. We investigate a cellular overlay D2D network where a dedicated radio resource is allocated for D2D communications to remove cross-interference with cellular communications, and all D2D devices share the dedicated radio resource to improve spectral efficiency. More specifically, we study radio resource management for D2D networks, one of the most challenging problems in this setting, and we propose a new transmission algorithm for D2D networks based on deep learning with a convolutional neural network (CNN). The CNN is formulated to yield a binary vector indicating whether each D2D pair is allowed to transmit data. In order to train and verify the CNN, we obtain data samples from a suboptimal algorithm. Our numerical results show that the accuracy of the proposed deep-learning-based transmission algorithm reaches about 85%∼95%, despite the simple structure imposed by limited computing power.
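A hedged sketch of the kind of CNN described above: it maps an assumed N x N cross-channel-gain matrix to one transmit/no-transmit logit per D2D pair. All dimensions and layers are illustrative; the paper's network and its suboptimal-algorithm labels may differ.

```python
# A minimal sketch of a CNN that outputs one transmit/no-transmit logit per
# D2D pair from an assumed N x N cross-channel-gain matrix.
import torch
import torch.nn as nn

class D2DTransmitCNN(nn.Module):
    def __init__(self, num_pairs: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_pairs)

    def forward(self, gain_matrix: torch.Tensor) -> torch.Tensor:
        # gain_matrix: (batch, 1, N, N) channel gains between D2D pairs
        h = self.features(gain_matrix).flatten(1)
        return self.head(h)                        # one logit per D2D pair

net = D2DTransmitCNN(num_pairs=16)
decision = (torch.sigmoid(net(torch.randn(1, 1, 16, 16))) > 0.5).int()  # binary transmit vector
```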


2020 ◽  
Vol 38 (6A) ◽  
pp. 832-845
Author(s):  
Sajidah S. Mahmood ◽  
Laith J. Saud

Moving object detection, type recognition, and traffic analysis in video-based surveillance systems is an active area of research with many applications in road traffic monitoring. This paper uses classical image processing approaches to develop an efficient algorithm for a computer-vision-based traffic surveillance system that can detect and classify moving vehicles, besides serving other traffic analysis tasks such as finding vehicle speed and heading, tracking specified vehicles, and finding traffic load. The algorithm is designed to be flexible to modification as design objectives change, to have limited computation time, to give good accuracy, and to allow inexpensive implementation. A success rate of 92% is achieved on the considered test, with the missed cases being abnormal ones not defined to the algorithm. The computation time the algorithm took to produce results, although platform (hardware and software) dependent, was a fraction of a millisecond. A CNN-based deep learning classifier was also built and evaluated to judge the feasibility of involving a modern approach in the design for the aims targeted in this work. The modern deep learning approach is very powerful and is the choice for many sophisticated applications, but when the purpose is restricted to limited requirements, as is believed to be the case here, the reasonable choice is to use classical image processing procedures. In making this choice, it is important to consider, among other things, accuracy, computation time, and simplicity of design, development, and implementation.
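For concreteness, here is a minimal OpenCV sketch of a classical detection pipeline of the kind described above (background subtraction, morphological clean-up, contour-based vehicle detection). The file name, area threshold, and kernel size are assumptions, not the authors' parameters.

```python
# A minimal classical pipeline: background subtraction -> morphology -> contours.
import cv2

cap = cv2.VideoCapture("traffic.mp4")               # assumed input video
backsub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = backsub.apply(frame)                      # foreground (moving) pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # suppress noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 800:                 # ignore small blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    # vehicle speed and heading would follow from tracking these boxes across frames
cap.release()
```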


2021 ◽  
Vol 10 (1) ◽  
pp. 18
Author(s):  
Quentin Cabanes ◽  
Benaoumeur Senouci ◽  
Amar Ramdane-Cherif

Cyber-Physical Systems (CPSs) are a mature research topic at the intersection of Artificial Intelligence (AI) and Embedded Systems (ES). They interact with the physical world via sensors/actuators to solve problems in several applications (robotics, transportation, health, etc.). These CPSs deal with data analysis, which needs powerful algorithms combined with robust hardware architectures. On the one hand, Deep Learning (DL) is proposed as the main solution algorithm. On the other hand, the standard design and prototyping methodologies for ES are not adapted to modern DL-based CPSs. In this paper, we investigate AI design for CPSs around embedded DL. The main contribution of this work is threefold: (1) we define an embedded DL methodology based on a Multi-CPU/FPGA platform; (2) we propose a new hardware design architecture of a Neural Network Processor (NNP) for DL algorithms, with the computation time of a feed-forward sequence estimated at 23 ns per parameter; (3) we validate the proposed methodology and the DL-based NNP using a smart LIDAR application use case. The input of our NNP is a voxel grid computed in hardware from a 3D point cloud. Finally, the results show that our NNP is able to process a Dense Neural Network (DNN) architecture without bias.
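As an illustration of the NNP's input stage, here is a minimal NumPy sketch of voxelizing a 3D point cloud into an occupancy grid; the grid resolution and spatial bounds are assumed values, and the paper computes this step in hardware rather than in software.

```python
# A minimal software sketch of point-cloud voxelization (the paper does this in hardware).
import numpy as np

def voxelize(points: np.ndarray, bounds=(-10.0, 10.0), res=32) -> np.ndarray:
    """points: (N, 3) array of x, y, z coordinates in metres."""
    lo, hi = bounds
    grid = np.zeros((res, res, res), dtype=np.uint8)
    # map each point to a voxel index, dropping points outside the bounds
    idx = np.floor((points - lo) / (hi - lo) * res).astype(int)
    valid = np.all((idx >= 0) & (idx < res), axis=1)
    grid[tuple(idx[valid].T)] = 1          # occupancy: 1 if any point falls in the voxel
    return grid

cloud = np.random.uniform(-10, 10, size=(5000, 3))   # dummy LIDAR scan
voxels = voxelize(cloud)
print(voxels.sum(), "occupied voxels out of", voxels.size)
```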


Author(s):  
S. Arokiaraj ◽  
Dr. N. Viswanathan

With the advent of the Internet of Things (IoT), human activity (HA) recognition has contributed more applications in health care in terms of diagnosis and clinical processes. These devices must be aware of human movements to provide better aid in clinical applications as well as in users' daily activities. Aided by machine and deep learning algorithms, HA recognition systems have also significantly improved in terms of recognition accuracy. However, most of the existing models need improvement in terms of accuracy and computational overhead. In this research paper, we propose a BAT-optimized Long Short-Term Memory (BAT-LSTM) model for effective recognition of human activities using real-time IoT systems. The data are collected by invasively implanted IoT devices. The proposed BAT-LSTM is then deployed to extract temporal features, which are used to classify the HA. Nearly 10,0000 data samples were collected and used to evaluate the proposed model. For validation of the proposed framework, accuracy, precision, recall, specificity, and F1-score are chosen, and a comparison is made with other state-of-the-art deep learning models. The findings show that the proposed model outperforms the other learning models and is suitable for HA recognition.
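A minimal PyTorch sketch of an LSTM activity classifier of the kind described; in the paper the BAT optimizer would tune hyper-parameters, which are simply fixed here as assumptions (sensor count, hidden size, number of activity classes).

```python
# A minimal LSTM activity classifier; hyper-parameters are illustrative and
# would be tuned by the BAT metaheuristic in the paper's framework.
import torch
import torch.nn as nn

class ActivityLSTM(nn.Module):
    def __init__(self, num_sensors: int = 6, hidden: int = 64, num_classes: int = 6):
        super().__init__()
        self.lstm = nn.LSTM(num_sensors, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, window: torch.Tensor) -> torch.Tensor:
        # window: (batch, time_steps, num_sensors) raw IoT sensor readings
        _, (h_n, _) = self.lstm(window)
        return self.fc(h_n[-1])            # class logits from the last hidden state

model = ActivityLSTM()
logits = model(torch.randn(8, 128, 6))     # 8 windows of 128 samples each
predicted_activity = logits.argmax(dim=1)
```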


2020 ◽  
Vol 641 ◽  
pp. A67
Author(s):  
F. Sureau ◽  
A. Lechat ◽  
J.-L. Starck

The deconvolution of large survey images with millions of galaxies requires developing a new generation of methods that can take a space-variant point spread function into account. These methods also have to be accurate and fast. We investigate how deep learning might be used to perform this task. We employed a U-net deep neural network architecture to learn parameters adapted for galaxy image processing in a supervised setting and studied two deconvolution strategies. The first approach is a post-processing of a mere Tikhonov deconvolution with a closed-form solution, and the second is an iterative deconvolution framework based on the alternating direction method of multipliers (ADMM). Our numerical results, based on GREAT3 simulations with realistic galaxy images and point spread functions, show that our two approaches outperform standard techniques based on convex optimization, whether assessed in galaxy image reconstruction or shape recovery. The approach based on Tikhonov deconvolution leads to the most accurate results, except for ellipticity errors at high signal-to-noise ratio, where the ADMM approach performs slightly better. Considering that the Tikhonov approach is also more computation-time efficient in processing a large number of galaxies, we recommend it in this scenario.
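The Tikhonov step in the first strategy has a closed form in the Fourier domain, which the U-net then post-processes. A minimal NumPy sketch follows, with the regularisation weight chosen arbitrarily rather than taken from the paper.

```python
# Closed-form Tikhonov deconvolution: x_hat = argmin ||y - Hx||^2 + lam ||x||^2,
# solved in Fourier space as X = conj(H) Y / (|H|^2 + lam). The lam value is assumed.
import numpy as np

def tikhonov_deconvolve(blurred: np.ndarray, psf: np.ndarray, lam: float = 1e-2) -> np.ndarray:
    """blurred, psf: 2D arrays of the same shape (psf centred, then shifted to the origin)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))
```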


2016 ◽  
Vol 2016 ◽  
pp. 1-13 ◽  
Author(s):  
L. M. Rasdi Rere ◽  
Mohamad Ivan Fanany ◽  
Aniati Murni Arymurthy

Modern optimization techniques are usually either heuristic or metaheuristic. These techniques have managed to solve optimization problems across science, engineering, and industry. However, implementation strategies for metaheuristics to improve the accuracy of convolutional neural networks (CNNs), a well-known deep learning method, are still rarely investigated. Deep learning is a type of machine learning whose aim is to move closer to the goal of artificial intelligence: creating a machine that can successfully perform any intellectual task that a human can carry out. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNNs. The performance of these metaheuristic methods in optimizing CNNs on classifying the MNIST and CIFAR datasets was evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in computation time, their accuracy is also improved (by up to 7.14 percent).
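To make the metaheuristic step concrete, here is a minimal sketch of a simulated-annealing loop over a flattened vector of CNN weights; loss_fn stands for the network's validation loss (assumed to be supplied by the caller), and the schedule parameters are illustrative rather than those used in the paper.

```python
# A minimal simulated-annealing loop over a weight vector.
import numpy as np

def simulated_annealing(weights, loss_fn, n_iter=1000, t0=1.0, cooling=0.995, step=0.01):
    rng = np.random.default_rng(0)
    current = weights.copy()
    best = current.copy()
    cur_loss = best_loss = loss_fn(current)
    temp = t0
    for _ in range(n_iter):
        candidate = current + rng.normal(0.0, step, size=current.shape)  # perturb weights
        cand_loss = loss_fn(candidate)
        # always accept improvements; accept worse moves with Boltzmann probability
        if cand_loss < cur_loss or rng.random() < np.exp((cur_loss - cand_loss) / temp):
            current, cur_loss = candidate, cand_loss
            if cur_loss < best_loss:
                best, best_loss = current.copy(), cur_loss
        temp *= cooling                              # geometric cooling schedule
    return best, best_loss
```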


Geophysics ◽  
2020 ◽  
pp. 1-61
Author(s):  
Janaki Vamaraju ◽  
Jeremy Vila ◽  
Mauricio Araya-Polo ◽  
Debanjan Datta ◽  
Mohamed Sidahmed ◽  
...  

Migration techniques are an integral part of seismic imaging workflows. Least-squares reverse time migration (LSRTM) overcomes some of the shortcomings of conventional migration algorithms by compensating for illumination and removing sampling artifacts to increase spatial resolution. However, the computational cost associated with iterative LSRTM is high, and convergence can be slow in complex media. We implement pre-stack LSRTM in a deep learning framework and adopt strategies from the data science domain to accelerate convergence. The proposed hybrid framework leverages existing physics-based models and machine learning optimizers to achieve better and cheaper solutions. Using a time-domain formulation, we show that mini-batch gradients can reduce the computation cost by using a subset of the total shots in each iteration. The mini-batch approach not only reduces source cross-talk but is also less memory intensive. Combining mini-batch gradients with deep learning optimizers and loss functions can improve the efficiency of LSRTM. Deep learning optimizers such as adaptive moment estimation are generally well suited for noisy and sparse data. We compare different optimizers and demonstrate their efficacy in mitigating migration artifacts. To further accelerate the inversion, we adopt the regularised Huber loss function in conjunction with these optimizers. We apply these techniques to the 2D Marmousi and 3D SEG/EAGE salt models and show improvements over conventional LSRTM baselines. The proposed approach achieves higher spatial resolution in less computation time, as measured by various qualitative and quantitative evaluation metrics.
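A hedged PyTorch sketch of the mini-batch idea follows: each iteration draws a subset of shots, forms a Huber data misfit, and lets Adam update the reflectivity. The forward_modeling placeholder stands in for the wave-equation (Born) operator and is purely illustrative, as are the model and data sizes.

```python
# Mini-batch LSRTM-style loop with Adam and a Huber loss (illustrative only).
import torch

def forward_modeling(reflectivity, shot_ids):
    # placeholder linear operator; the real code would run wave propagation per shot
    return reflectivity.sum() * torch.ones(len(shot_ids), 1000)

reflectivity = torch.zeros(200 * 200, requires_grad=True)   # assumed 2D model, flattened
observed = torch.randn(64, 1000)                            # 64 recorded shots (dummy data)
optimizer = torch.optim.Adam([reflectivity], lr=1e-2)
huber = torch.nn.HuberLoss(delta=1.0)

for it in range(100):
    shot_ids = torch.randperm(64)[:8]                       # mini-batch of 8 shots
    optimizer.zero_grad()
    predicted = forward_modeling(reflectivity, shot_ids)
    loss = huber(predicted, observed[shot_ids])              # robust data misfit
    loss.backward()
    optimizer.step()
```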


2021 ◽  
Author(s):  
Neeraj Kumar Rathore ◽  
Varshali Jaiswal ◽  
Varsha Sharma ◽  
Sunita Varma

Deep convolutional neural networks (CNNs) belong to deep learning, a branch of computer science. Deep learning is a methodology that teaches computer systems to do what comes naturally to humans: it learns by example and experience. It is a heuristic-based method for solving computationally exhaustive problems, such as NP-hard problems, that cannot be solved in polynomial time. The purpose of this research is to develop a hybrid methodology for the detection and segmentation of flower images that utilizes an extension of the deep CNN. Plant, leaf, and flower image detection is among the most challenging problems due to the wide variety of classes, which differ in texture, color distinctiveness, shape distinctiveness, and size. The proposed methodology is implemented in Matlab with the Deep Learning Toolbox, and the flower image dataset is taken from Kaggle with five classes: daisy, dandelion, rose, tulip, and sunflower. The methodology takes different flower images from the dataset as input and converts them from the RGB (red, green, blue) color model to the L*a*b* color model, which reduces the effort of image segmentation. Flower image segmentation is then performed by the Canny edge detection algorithm, which provides better results. The implemented extended deep learning convolutional neural network can accurately recognize varieties of flower images. The learning accuracy of the proposed hybrid method is up to 98%, an improvement of up to 1.89% over the state of the art.
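A minimal OpenCV sketch of the pre-processing chain described above (RGB to L*a*b* conversion followed by Canny edge detection); the file name and Canny thresholds are illustrative assumptions, and the paper's own implementation is in Matlab.

```python
# Colour-space conversion and edge-based segmentation step (illustrative parameters).
import cv2

image = cv2.imread("tulip.jpg")                      # assumed image from the flower dataset (BGR)
lab = cv2.cvtColor(image, cv2.COLOR_BGR2LAB)         # L*a*b* separates lightness from colour
l_channel, a_channel, b_channel = cv2.split(lab)
edges = cv2.Canny(l_channel, 50, 150)                # edge map used to segment the flower region
cv2.imwrite("tulip_edges.png", edges)
# The segmented region would then be passed to the extended CNN classifier.
```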


Author(s):  
Andrés Ruiz-Tagle Palazuelos ◽  
Enrique López Droguett ◽  
Rodrigo Pascual

With the availability of cheaper multi-sensor systems, one has access to massive, multi-dimensional sensor data for fault diagnostics and prognostics. However, from a time, engineering, and computational perspective, it is often cost prohibitive to manually extract useful features and to label all the data. To address these challenges, deep learning techniques have been used in recent years. Among these, convolutional neural networks have shown remarkable performance in fault diagnostics and prognostics. However, this model presents limitations from a prognostics and health management perspective: to improve its feature extraction generalization capabilities and reduce computation time, pooling operations are employed that require sub-sampling of the data, thus losing potentially valuable information regarding an asset's degradation process. Capsule neural networks have recently been proposed to address these problems, with strong results in computer-vision-related classification tasks. This has motivated us to extend capsule neural networks to fault prognostics and, in particular, remaining useful life estimation. The proposed model, architecture, and algorithm are tested and compared with other state-of-the-art deep learning models on the benchmark Commercial Modular Aero-Propulsion System Simulation turbofan data set. The results indicate that the proposed capsule neural networks are a promising approach for remaining useful life prognostics from multi-dimensional sensor data.
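For readers unfamiliar with capsules, here is a minimal sketch of the squash non-linearity and one routing-by-agreement pass, the mechanism that replaces pooling; the dimensions are illustrative and not those of the RUL model in the paper.

```python
# Capsule "squash" non-linearity and dynamic routing (illustrative dimensions).
import torch

def squash(s: torch.Tensor, dim: int = -1) -> torch.Tensor:
    # shrink vector length into (0, 1) while preserving orientation
    norm2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + 1e-9)

def route(u_hat: torch.Tensor, iterations: int = 3) -> torch.Tensor:
    # u_hat: (batch, in_caps, out_caps, out_dim) predictions from lower capsules
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)          # routing logits
    for _ in range(iterations):
        c = torch.softmax(b, dim=2).unsqueeze(-1)                  # coupling coefficients
        v = squash((c * u_hat).sum(dim=1))                         # (batch, out_caps, out_dim)
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)               # agreement update
    return v

v = route(torch.randn(4, 32, 10, 16))
print(v.shape)   # torch.Size([4, 10, 16])
```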

