Cross-Layer Learning

2022 ◽  
pp. 62-90
Author(s):  
Tushar Mane ◽  
Ambika Pawar

Deep learning-based investigation mechanisms are available for conventional forensics, but not for IoT forensics. The fundamental concept behind the proposed intelligent system is to divide the system into layers according to their functionalities, collect data from each layer, find the correlating factor, and use it for pattern detection. The authors apply this notion to embed intelligence in forensics and speed up the investigation process by providing hints to the examiner. They propose a novel cross-layer learning architecture (CCLA) for IoT forensics. To the best of their knowledge, this is the first attempt to incorporate deep learning into the forensics of the IoT ecosystem.
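The "correlating factor" step above can be illustrated with a minimal sketch: collect a per-window event count from each layer and compute a correlation coefficient between layers, flagging strongly correlated windows as hints for the examiner. The layer names and counts below are hypothetical, not from the paper.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length numeric series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy)

# hypothetical per-hour event counts collected from two IoT layers
network_layer = [3, 5, 2, 9, 14, 4]
application_layer = [2, 6, 1, 10, 15, 3]

r = pearson(network_layer, application_layer)
if r > 0.8:
    print("layers strongly correlated: flag this window as a hint for the examiner")
```

In a real system the correlated windows would feed a deep model for pattern detection; this sketch only shows the cross-layer correlation idea itself.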

Author(s):  
Nemania Borovits ◽  
Indika Kumara ◽  
Parvathy Krishnan ◽  
Stefano Dalla Palma ◽  
Dario Di Nucci ◽  
...  

Author(s):  
Brahim Jabir ◽  
Noureddine Falih

In precision farming, identifying weeds is an essential first step in planning an integrated pest management program in cereals. Knowing the species present tells us which herbicides to use to control them, especially in non-weeding crops where mechanical methods (tillage, hand weeding, hoeing, and mowing) are not effective. Deep learning based on convolutional neural networks (CNNs) can therefore help identify weeds automatically, and an intelligent system can then achieve localized spraying of the herbicides, avoiding their large-scale use and preserving the environment. In this article we propose a smart system based on object detection models, implemented on a Raspberry Pi, that identifies relevant objects (weeds) in an area (a wheat crop) in real time and classifies those objects for decision support, including spot spraying with a herbicide chosen according to the weed detected.
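The decision-support step — mapping each confident detection to a spot-spray action — can be sketched as a simple lookup over the detector's output. The weed class names, herbicide labels, and detection tuple format below are hypothetical assumptions, not the paper's actual interface.

```python
# hypothetical weed-class -> herbicide lookup for spot spraying
HERBICIDE_FOR = {
    "wild_oat": "herbicide_A",
    "ryegrass": "herbicide_B",
}

def spray_decision(detections, confidence_threshold=0.6):
    """detections: list of (class_name, confidence, bbox) tuples from the
    object detector; returns (bbox, herbicide) actions for confident weeds."""
    actions = []
    for name, conf, bbox in detections:
        if name in HERBICIDE_FOR and conf >= confidence_threshold:
            actions.append((bbox, HERBICIDE_FOR[name]))
    return actions

# example: two detections, one below the confidence threshold
dets = [("wild_oat", 0.9, (10, 20, 50, 60)), ("ryegrass", 0.4, (80, 15, 120, 55))]
print(spray_decision(dets))  # only the confident wild_oat detection triggers spraying
```

On the device, the returned bounding boxes would be translated into nozzle coordinates; that hardware-specific step is omitted here.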


2022 ◽  
Vol 6 (1) ◽  
Author(s):  
Marco Rossi ◽  
Sofia Vallecorsa

Abstract: In this work, we investigate different machine learning-based strategies for denoising raw simulation data from the ProtoDUNE experiment. The ProtoDUNE detector is hosted by CERN and it aims to test and calibrate the technologies for DUNE, a forthcoming experiment in neutrino physics. The reconstruction workchain consists of converting digital detector signals into physical high-level quantities. We address the first step in reconstruction, namely raw data denoising, leveraging deep learning algorithms. We design two architectures based on graph neural networks, aiming to enhance the receptive field of basic convolutional neural networks. We benchmark this approach against traditional algorithms implemented by the DUNE collaboration. We test the capabilities of graph neural network hardware accelerator setups to speed up training and inference processes.
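The receptive-field argument can be made concrete with a toy sketch: each round of graph message passing mixes a node's value with its neighbours', so k rounds let information travel k hops — further than a small convolution kernel. This is a generic illustration of neighbour averaging on a chain graph, not the paper's architecture.

```python
import numpy as np

def graph_smooth(x, adj, steps=2):
    """Simple graph message passing: each step blends a node's value with the
    mean of its neighbours, widening the receptive field by one hop per step."""
    deg = adj.sum(axis=1, keepdims=True)
    norm_adj = adj / np.maximum(deg, 1)   # row-normalised adjacency
    for _ in range(steps):
        x = 0.5 * x + 0.5 * (norm_adj @ x)
    return x

# toy chain of 5 nodes with an impulse-noise spike in the middle
adj = np.eye(5, k=1) + np.eye(5, k=-1)        # chain-graph adjacency
signal = np.array([1.0, 1.0, 9.0, 1.0, 1.0])  # noise spike at index 2
print(graph_smooth(signal, adj))              # the spike is spread and damped
```

A trained graph network would learn the mixing weights instead of the fixed 0.5 blend used here; the sketch only shows how the hop count grows with depth.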


Author(s):  
Xueying Wang ◽  
Guangli Li ◽  
Xiao Dong ◽  
Jiansong Li ◽  
Lei Liu ◽  
...  

Electronics ◽  
2018 ◽  
Vol 7 (12) ◽  
pp. 411 ◽  
Author(s):  
Emanuele Torti ◽  
Alessandro Fontanella ◽  
Antonio Plaza ◽  
Javier Plaza ◽  
Francesco Leporati

One of the most important tasks in hyperspectral imaging is the classification of the pixels in the scene in order to produce thematic maps. This problem can typically be solved through machine learning techniques. In particular, deep learning algorithms have emerged in recent years as a suitable methodology to classify hyperspectral data. Moreover, the high dimensionality of hyperspectral data, together with the increasing availability of unlabeled samples, makes deep learning an appealing approach to process and interpret those data. However, the limited number of labeled samples often complicates the exploitation of supervised techniques. Indeed, in order to guarantee a suitable precision, a large number of labeled samples is normally required. This hurdle can be overcome by resorting to unsupervised classification algorithms. In particular, autoencoders can be used to analyze a hyperspectral image using only unlabeled data. However, the high data dimensionality leads to prohibitive training times. In this regard, it is important to realize that the operations involved in autoencoder training are intrinsically parallel. Therefore, in this paper we present an approach that exploits multi-core and many-core devices in order to achieve efficient autoencoder training in hyperspectral imaging applications. Specifically, we present new OpenMP and CUDA frameworks for autoencoder training. The obtained results show that the CUDA framework provides a speed-up of about two orders of magnitude as compared to an optimized serial processing chain.
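The claim that autoencoder training is intrinsically parallel can be seen in a minimal sketch: every training step reduces to dense matrix products, which is exactly what OpenMP and CUDA accelerate. The tied-weight linear autoencoder, the data shapes, and the learning rate below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 30))            # 256 unlabeled "pixels", 30 spectral bands
W = rng.normal(scale=0.1, size=(30, 8))   # encoder 30 -> 8; tied decoder is W.T

def mse(X, W):
    """Reconstruction error of the tied-weight linear autoencoder."""
    return float(np.mean((X @ W @ W.T - X) ** 2))

before = mse(X, W)
lr = 0.01
for _ in range(200):
    E = X @ W @ W.T - X                               # reconstruction error
    grad = (X.T @ E @ W + E.T @ X @ W) / len(X)       # gradient of ||E||^2/n (up to a constant factor)
    W -= lr * grad                                    # every operation above is a matrix product

after = mse(X, W)
print(before, after)  # the error drops as W learns a low-dimensional subspace
```

Each iteration is dominated by GEMM calls, so the same loop maps directly onto multi-threaded BLAS (OpenMP) or GPU kernels (CUDA), which is the source of the reported speed-up.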


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 58279-58289 ◽  
Author(s):  
Cheng-Hsiung Lee ◽  
Jung-Sing Jwo ◽  
Han-Yi Hsieh ◽  
Ching-Sheng Lin

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 157730-157740
Author(s):  
Shu-Ming Tseng ◽  
Yung-Fang Chen ◽  
Cheng-Shun Tsai ◽  
Wen-Da Tsai

2020 ◽  
Vol 10 (19) ◽  
pp. 6866
Author(s):  
Arnauld Nzegha Fountsop ◽  
Jean Louis Ebongue Kedieng Fendji ◽  
Marcellin Atemkeng

Deep learning has been showing promising results in plant disease detection, fruit counting, and yield estimation, and is gaining increasing interest in agriculture. Deep learning models are generally based on several millions of parameters that generate exceptionally large weight matrices, which require large memory and computational power for training, testing, and deployment. Unfortunately, these requirements make it difficult to deploy such models on the low-cost, resource-limited devices available in the field. In addition, the lack or poor quality of connectivity in farms does not allow remote computation. An approach that has been used to save memory and speed up processing is to compress the models. In this work, we tackle the challenges related to resource limitation by compressing some state-of-the-art models frequently used in image classification. To this end, we apply model pruning and quantization to LeNet5, VGG16, and AlexNet. Original and compressed models were applied to the plant seedling classification benchmark (V2 Plant Seedlings Dataset) and the Flavia database. Results reveal that it is possible to compress the size of these models by a factor of 38 and to reduce the FLOPs of VGG16 by a factor of 99 without considerable loss of accuracy.
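The two compression techniques named above can be sketched on a single weight matrix: magnitude pruning zeroes the smallest weights, and uniform affine quantization maps the rest onto 8-bit integers. The sparsity level and matrix shape are illustrative choices, not the paper's settings.

```python
import numpy as np

def prune(weights, sparsity=0.9):
    """Magnitude pruning: zero the smallest |w| until `sparsity` fraction is zero."""
    threshold = np.quantile(np.abs(weights).ravel(), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

def quantize(weights, bits=8):
    """Uniform affine quantization to `bits`-bit codes, then dequantize back."""
    lo, hi = weights.min(), weights.max()
    scale = (hi - lo) / (2 ** bits - 1)
    q = np.round((weights - lo) / scale).astype(np.uint8)  # stored as 1 byte per weight
    return q * scale + lo                                  # float approximation for use

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 64))
Wp = prune(W, sparsity=0.9)
Wq = quantize(W)
print((Wp == 0).mean())        # roughly 0.9 of the weights removed
print(np.max(np.abs(Wq - W)))  # per-weight quantization error bounded by ~scale
```

Pruning shrinks FLOPs (zeroed weights can be skipped) while 8-bit quantization shrinks storage by 4x versus float32; combining both is how factors like 38x in size become reachable on constrained field devices.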

