Deep Learning on Computational-Resource-Limited Platforms: A Survey

2020 ◽  
Vol 2020 ◽  
pp. 1-19 ◽  
Author(s):  
Chunlei Chen ◽  
Peng Zhang ◽  
Huixiang Zhang ◽  
Jiangyan Dai ◽  
Yugen Yi ◽  
...  

Nowadays, the Internet of Things (IoT) gives rise to a huge amount of data. IoT nodes equipped with smart sensors can immediately extract meaningful knowledge from the data through machine learning technologies. Deep learning (DL) is constantly driving significant progress in smart sensing owing to its dramatic advantages over traditional machine learning. The promising prospect of wide-ranging applications puts forward demands for the ubiquitous deployment of DL under various contexts. As a result, performing DL on mobile or embedded platforms is becoming a common requirement. Nevertheless, a typical DL application can easily exhaust an embedded or mobile device owing to the large number of multiply-accumulate (MAC) operations and memory access operations it requires. Consequently, bridging the gap between deep learning and resource-limited platforms is a challenging task. We summarize typical applications of resource-limited deep learning and point out that deep learning is an indispensable impetus of pervasive computing. Subsequently, we explore the underlying reasons for the high computational overhead of DL by reviewing fundamental concepts including the capacity, generalization, and backpropagation of a neural network. Guided by these concepts, we investigate the principles of representative research works, as well as three types of solutions: algorithmic design, computational optimization, and hardware revolution. In pursuing these solutions, we identify challenges that remain to be addressed.
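To make the source of this overhead concrete, the sketch below (plain Python, with purely illustrative layer sizes that are not taken from the survey) counts the MAC operations and weight memory of a single convolutional layer; even one mid-sized layer already costs hundreds of millions of MACs.

```python
# Back-of-the-envelope cost model for one convolutional layer.
# Layer sizes below are illustrative, not taken from the survey.

def conv2d_cost(h_out, w_out, c_in, c_out, k, bytes_per_weight=4):
    """Return (MACs, parameter bytes) for a k x k convolution."""
    macs = h_out * w_out * c_out * c_in * k * k   # one MAC per weight per output pixel
    params = c_out * c_in * k * k                  # weight count (bias ignored)
    return macs, params * bytes_per_weight

if __name__ == "__main__":
    # e.g. a 3x3 convolution producing 56x56x128 feature maps from 64 input channels
    macs, mem = conv2d_cost(h_out=56, w_out=56, c_in=64, c_out=128, k=3)
    print(f"MACs: {macs / 1e6:.1f} M, weight memory: {mem / 1e3:.1f} KB")
```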

2016 ◽  
Vol 12 (S325) ◽  
pp. 205-208
Author(s):  
Fernando Caro ◽  
Marc Huertas-Company ◽  
Guillermo Cabrera

In order to understand how galaxies form and evolve, measuring the parameters related to their morphologies and to the way they interact is one of the most relevant requirements. Due to the huge amount of data generated by surveys, the morphological and interaction analysis of galaxies can no longer rely on visual inspection. To deal with this issue, new approaches based on machine learning techniques have been proposed in recent years with the aim of automating the classification process. We tested deep learning using images of galaxies obtained from CANDELS to study the accuracy achieved by this tool under two different frameworks. In the first, galaxies were classified in terms of their shapes into five morphological categories, while in the second, the way in which galaxies interact was used to define another five categories. The results achieved in both cases are compared and discussed.
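As a rough illustration of the kind of classifier such a framework involves, here is a minimal five-class convolutional network in PyTorch; the architecture, single-band 64x64 input, and layer widths are assumptions for illustration, not the model used by the authors.

```python
# Minimal five-class CNN for galaxy-image classification (illustrative only;
# the architecture and 64x64 single-band input are assumptions, not the authors' model).
import torch
import torch.nn as nn

class GalaxyCNN(nn.Module):
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = GalaxyCNN()
dummy = torch.randn(8, 1, 64, 64)   # a batch of single-band image cutouts
logits = model(dummy)               # shape: (8, 5), one score per morphological category
```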


2018 ◽  
Vol 39 (8) ◽  
pp. 1871-1877 ◽  
Author(s):  
Tomoaki Sonobe ◽  
Hitoshi Tabuchi ◽  
Hideharu Ohsugi ◽  
Hiroki Masumoto ◽  
Naohumi Ishitobi ◽  
...  

2021 ◽  
Author(s):  
Sidhant Idgunji ◽  
Madison Ho ◽  
Jonathan L. Payne ◽  
Daniel Lehrmann ◽  
Michele Morsilli ◽  
...  

The growing digitization of fossil images has vastly improved and broadened the potential application of big data and machine learning, particularly computer vision, in paleontology. Recent studies show that machine learning is capable of approaching human abilities of classifying images, and with the increase in computational power and visual data, it stands to reason that it can match human ability but at much greater efficiency in the near future. Here we demonstrate this potential of using deep learning to identify skeletal grains at different levels of the Linnaean taxonomic hierarchy. Our approach was two-pronged. First, we built a database of skeletal grain images spanning a wide range of animal phyla and classes and used this database to train the model. We used a Python-based method to automate image recognition and extraction from published sources. Second, we developed a deep learning algorithm that can attach multiple labels to a single image. Conventionally, deep learning is used to predict a single class from an image; here, we adopted a Branch Convolutional Neural Network (B-CNN) technique to classify multiple taxonomic levels for a single skeletal grain image. Using this method, we achieved over 90% accuracy for both the coarse, phylum-level recognition and the fine, class-level recognition across diverse skeletal grains (6 phyla and 15 classes). Furthermore, we found that image augmentation improves the overall accuracy. This tool has potential applications in geology ranging from biostratigraphy to paleo-bathymetry, paleoecology, and microfacies analysis. Further improvement of the algorithm and expansion of the training dataset will continue to narrow the efficiency gap between human expertise and machine learning.
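The branch idea behind a B-CNN can be sketched as a shared backbone with a coarse head attached to an early stage and a fine head attached to a later stage; the minimal PyTorch sketch below illustrates this (only the 6-phylum / 15-class label counts come from the abstract, all layer sizes are assumed).

```python
# Sketch of a Branch CNN (B-CNN): a coarse (phylum) head branches off an early
# stage of the network, a fine (class) head off a later stage. Layer sizes are
# illustrative; only the 6-phylum / 15-class label counts come from the abstract.
import torch
import torch.nn as nn

class BranchCNN(nn.Module):
    def __init__(self, n_phyla: int = 6, n_classes: int = 15):
        super().__init__()
        self.stage1 = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.coarse_head = nn.Linear(32, n_phyla)    # branch after stage 1
        self.fine_head = nn.Linear(64, n_classes)    # branch after stage 2

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        coarse = self.coarse_head(self.pool(f1).flatten(1))
        fine = self.fine_head(self.pool(f2).flatten(1))
        return coarse, fine                          # two labels per grain image

model = BranchCNN()
coarse_logits, fine_logits = model(torch.randn(4, 1, 64, 64))
# Training would combine the two cross-entropy losses, e.g.
# loss = w1 * ce(coarse_logits, phylum_labels) + w2 * ce(fine_logits, class_labels)
```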


Author(s):  
Rajasekaran Thangaraj ◽  
Sivaramakrishnan Rajendar ◽  
Vidhya Kandasamy

Healthcare monitoring has become a popular research area in recent years. The evolution of electronic devices has brought out numerous wearable devices that can be used in a variety of healthcare monitoring systems. These devices measure the patient's health parameters and send them for further processing, where the acquired data is analyzed. The analysis provides the patients or their relatives with the required medical support or with predictions based on the acquired data. Cloud computing plays a prominent role in processing the data, while deep learning and machine learning technologies play a prominent role in analyzing it. This chapter aims to provide a detailed study of IoT-based healthcare systems, the variety of sensors used to measure health parameters, and the various deep learning and machine learning approaches introduced for the diagnosis of different diseases. The chapter also highlights the challenges, open issues, and performance considerations for future IoT-based healthcare research.


Sensors ◽  
2019 ◽  
Vol 19 (22) ◽  
pp. 4837 ◽  
Author(s):  
Stamatios Samaras ◽  
Eleni Diamantidou ◽  
Dimitrios Ataloglou ◽  
Nikos Sakellariou ◽  
Anastasios Vafeiadis ◽  
...  

Usage of Unmanned Aerial Vehicles (UAVs) is growing rapidly in a wide range of consumer applications, as they prove to be both autonomous and flexible in a variety of environments and tasks. However, this versatility and ease of use also bring a rapid evolution of threats by malicious actors who can use UAVs for criminal activities, turning them into passive or active threats. The need to protect critical infrastructures and important events from such threats has driven advances in counter-UAV (c-UAV) applications. Nowadays, c-UAV applications offer systems that comprise a multi-sensory arsenal, often including electro-optical, thermal, acoustic, radar and radio-frequency sensors, whose information can be fused to increase the confidence of threat identification. Nevertheless, real-time surveillance is a cumbersome process, yet it is absolutely essential for promptly detecting the occurrence of adverse events or conditions. To that end, many challenging tasks arise, such as object detection, classification, multi-object tracking and multi-sensor information fusion. In recent years, researchers have utilized deep learning-based methodologies to tackle these tasks for generic objects and made noteworthy progress, yet applying deep learning to UAV detection and classification is still considered a novel concept. Therefore, the need has emerged for a complete overview of deep learning technologies applied to c-UAV-related tasks on multi-sensor data. The aim of this paper is to describe deep learning advances on c-UAV-related tasks when applied to data originating from many different sensors, as well as multi-sensor information fusion. This survey may help in making recommendations for and improvements to c-UAV applications in the future.
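As one very simple illustration of multi-sensor information fusion, the sketch below combines per-sensor detection confidences into a single threat score by weighted averaging (late fusion); the sensor names, weights, and alert threshold are hypothetical and not drawn from any particular c-UAV system.

```python
# Toy late fusion of per-sensor UAV-detection confidences into one threat score.
# Sensor names, weights, and alert threshold are hypothetical.

SENSOR_WEIGHTS = {"optical": 0.35, "thermal": 0.20, "acoustic": 0.15,
                  "radar": 0.20, "rf": 0.10}

def fuse(confidences: dict) -> float:
    """Weighted average of the available per-sensor confidences in [0, 1]."""
    total_w = sum(SENSOR_WEIGHTS[s] for s in confidences)
    return sum(SENSOR_WEIGHTS[s] * c for s, c in confidences.items()) / total_w

if __name__ == "__main__":
    # Only three of the five sensors report a detection in this example.
    score = fuse({"optical": 0.92, "radar": 0.78, "rf": 0.40})
    print("UAV threat score:", round(score, 3), "-> alert" if score > 0.6 else "-> no alert")
```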


2020 ◽  
pp. 73-86
Author(s):  
Prof. M S S El Namaki ◽  

Problem solving is a daily occurrence in business and also in human brains. Businesses resort to a variety of modes in order to find answers to these problems. Human brains likewise adopt a variety of measures to solve their own brand of problems. Artificial Intelligence technologies seem to have been extending a helping hand to business in the search for problem-solving mechanisms. Machine learning and deep learning are currently recognized as prime modes for business insight and problem solving. Does the human brain possess competencies and instruments that could compare to the deep learning technologies adopted by AI?


2020 ◽  
Vol 6 (3) ◽  
pp. 27-32
Author(s):  
Artur S. Ter-Levonian ◽  
Konstantin A. Koshechkin

Introduction: Nowadays, an increase in the amount of information creates the need to replace and update data processing technologies. One of the tasks of clinical pharmacology is to create the right combination of drugs for the treatment of a particular disease. It takes months and even years to create a treatment regimen. Using machine learning (in silico) makes it possible to predict the right combination of drugs and to skip experimental steps in a study that require a lot of time and money. Gradual preparation is needed for deep learning of drug synergy, starting from creating a database of drugs, their characteristics and ways of interacting. Aim: Our review aims to draw attention to the prospect of introducing deep learning technology to predict possible combinations of drugs for the treatment of various diseases. Materials and methods: A literature review of articles in the PubMed database and related bibliographic resources over the past 5 years (2015–2019) was performed. Results and discussion: In the analyzed articles, machine or deep learning completed the assigned tasks: it was able to determine the most appropriate combinations for the treatment of certain diseases and select the necessary regimen and doses. In addition, using this technology, new combinations have been identified that may be further involved in preclinical studies. Conclusions: From the analysis of the articles, we obtained evidence of the positive effects of deep learning in selecting "key" combinations for further stages of preclinical research.
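A common way such predictors are framed, sketched below in PyTorch with assumed feature dimensions, is to concatenate feature vectors describing two drugs and a cell line and regress a synergy score; this is an illustrative sketch, not the model used in any of the reviewed studies.

```python
# Minimal sketch of a drug-synergy regressor: concatenate feature vectors of
# two drugs and a cell line, then predict a synergy score. All dimensions are assumed.
import torch
import torch.nn as nn

class SynergyNet(nn.Module):
    def __init__(self, drug_dim: int = 256, cell_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * drug_dim + cell_dim, 512), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, 1),                      # predicted synergy score
        )

    def forward(self, drug_a, drug_b, cell_line):
        return self.mlp(torch.cat([drug_a, drug_b, cell_line], dim=-1))

model = SynergyNet()
# A batch of four hypothetical drug-pair / cell-line combinations.
score = model(torch.randn(4, 256), torch.randn(4, 256), torch.randn(4, 128))
```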


Author(s):  
Oleksandr Dudin ◽  
Ozar Mintser ◽  
Oksana Sulaieva ◽  
...  

Introduction. Over the past few decades, thanks to advances in algorithm development, the availability of computing power, and the management of large data sets, machine learning methods have become widely used in various fields of life. Among them, deep learning occupies a special place: it is used in many spheres of health care and is an integral part of, and prerequisite for, the development of digital pathology. Objectives. The purpose of the review was to gather the data on existing image analysis technologies and machine learning tools developed for whole-slide digital images in pathology. Methods. An analysis of the literature on machine learning methods used in pathology, the steps of automated image analysis, the types of neural networks, and their application and capabilities in digital pathology was performed. Results. To date, a wide range of deep learning strategies have been developed, which are actively used in digital pathology and have demonstrated excellent diagnostic accuracy. In addition to diagnostic solutions, the integration of artificial intelligence into the practice of the pathomorphological laboratory provides new tools for assessing prognosis and predicting sensitivity to different treatments. Conclusions. The synergy of artificial intelligence and digital pathology is a key tool to improve the accuracy of diagnostics, prognostication and the facilitation of personalized medicine.
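A typical first step in automated whole-slide image analysis is to tile the gigapixel slide into fixed-size patches that a patch-level classifier can consume; the sketch below shows that tiling step in plain NumPy (the 256-pixel patch size and stride are assumptions).

```python
# Sketch of whole-slide image tiling: split a large image array into fixed-size
# patches for patch-level classification. Patch size and stride are illustrative.
import numpy as np

def tile_slide(slide: np.ndarray, patch: int = 256, stride: int = 256):
    """Yield (row, col, patch_array) tiles from an H x W x 3 slide array."""
    h, w = slide.shape[:2]
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            yield r, c, slide[r:r + patch, c:c + patch]

if __name__ == "__main__":
    # Random stand-in for a (downsampled) slide region; real slides are gigapixel-scale.
    slide = np.random.randint(0, 255, (1024, 2048, 3), dtype=np.uint8)
    tiles = list(tile_slide(slide))
    print(f"{len(tiles)} patches of 256x256")   # each patch would be scored by a CNN
```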


2020 ◽  
Author(s):  
Haotian Guo ◽  
Xiaohu Song ◽  
Ariel B. Lindner

RNA-based regulation offers a promising alternative to protein-based transcriptional networks. However, designing synthetic riboregulators with desirable functionalities using arbitrary sequences remains challenging, due in part to insufficient exploration of RNA sequence-to-function landscapes. Here we report that CRISPR-Csy4 mediates a nearly all-or-none processing of precursor CRISPR RNAs (pre-crRNAs), by profiling Csy4 binding sites flanked by more than one million random sequences. This represents an ideal sequence-to-function space for universal riboregulator designs. Lacking discernible sequence-structural commonality among processable pre-crRNAs, we trained a neural network for accurate classification (F1-score ≈ 0.93). Inspired by exhaustive probing of palindromic flanking sequences, we designed anti-CRISPR RNAs (acrRNAs) that suppress processing of pre-crRNAs via stem stacking. We validated machine-learning-guided designs with more than 30 functional pairs of acrRNAs and pre-crRNAs that achieve switch-like properties. This opens a wide range of plug-and-play applications tailored through pre-crRNA designs, and represents a programmable alternative to protein-based anti-CRISPRs.
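As an illustration of the kind of sequence classifier described here, the sketch below one-hot encodes an RNA flanking sequence and scores it with a small 1-D convolutional network that outputs a processability probability; the 40-nt window, encoding, and architecture are assumptions, not the authors' published model.

```python
# Sketch of a processable / non-processable pre-crRNA classifier: one-hot encode
# an RNA sequence and score it with a small 1-D CNN. The 40-nt window and the
# architecture are assumptions, not the authors' published model.
import torch
import torch.nn as nn

BASES = "ACGU"

def one_hot(seq: str) -> torch.Tensor:
    """Encode an RNA string as a (4, length) one-hot tensor."""
    idx = torch.tensor([BASES.index(b) for b in seq])
    return torch.nn.functional.one_hot(idx, num_classes=4).T.float()

classifier = nn.Sequential(
    nn.Conv1d(4, 32, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveMaxPool1d(1), nn.Flatten(),
    nn.Linear(32, 1), nn.Sigmoid(),          # probability that the precursor is processable
)

x = one_hot("ACGUACGUACGUACGUACGUACGUACGUACGUACGUACGU").unsqueeze(0)  # (1, 4, 40)
p = classifier(x)                            # scalar probability per sequence
```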


At present, network communication is at high risk from external and internal attacks due to the large number of applications in various fields. Network traffic can be monitored by an Intrusion Detection System (IDS) to determine abnormality in software or hardware security mechanisms in the network. As attackers always change their techniques of attack and find alternative attack methods, IDS must also evolve in response by adopting more sophisticated methods of detection. The huge growth in data and the significant advances in computer hardware technologies have led to new studies in the deep learning field, including intrusion detection. Deep Learning (DL) is a subgroup of Machine Learning (ML) that hinges on learning data representations. A new deep learning-based model is presented in this research work to support IDS operation in modern networks. The model combines deep learning and machine learning, and is capable of accurate, wide-ranging analysis of network traffic. The approach proposes a non-symmetric deep autoencoder (NDAE) for learning features in an unsupervised manner. Furthermore, a classification model is constructed using stacked NDAEs. The performance is evaluated using a network intrusion detection analysis dataset, particularly the WSN Trace dataset. The contribution of this work is the implementation of an advanced deep learning algorithm for IDS use, which is efficient in taking instant measures to stop or minimize malicious actions.
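A minimal sketch of the non-symmetric autoencoder idea follows: several stacked encoding layers are trained to reconstruct the input through a single output layer rather than a mirrored decoder, and the learned codes from stacked NDAEs would then feed a downstream classifier; all layer widths and the 41-feature record width are assumptions, not taken from the WSN Trace dataset.

```python
# Sketch of a non-symmetric deep autoencoder (NDAE): several encoding layers,
# no mirrored decoder, trained to reconstruct the input through a single output
# layer. Codes from stacked NDAEs would feed a downstream classifier.
# Layer widths and the 41-feature record width are assumed.
import torch
import torch.nn as nn

class NDAE(nn.Module):
    def __init__(self, n_features: int, hidden=(32, 16, 8)):
        super().__init__()
        layers, prev = [], n_features
        for h in hidden:                              # stacked encoding layers only
            layers += [nn.Linear(prev, h), nn.Sigmoid()]
            prev = h
        self.encoder = nn.Sequential(*layers)
        self.output = nn.Linear(prev, n_features)    # single reconstruction layer

    def forward(self, x):
        code = self.encoder(x)
        return self.output(code), code

ndae = NDAE(n_features=41)                            # assumed record width
x = torch.rand(64, 41)
recon, code = ndae(x)
loss = nn.functional.mse_loss(recon, x)               # unsupervised reconstruction loss
# 'code' (together with codes from a second stacked NDAE) would feed the classifier stage.
```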

