Image Compression Based on Deep Learning: A Review

Author(s):  
Hajar Maseeh Yasin ◽  
Adnan Mohsin Abdulazeez

Image compression is an essential technology for encoding and improving various forms of images in the digital era. Researchers have extended deep learning, one of the most exciting and versatile machine learning methods, to different kinds of neural networks in order to analyze, classify, and compress images. Several families of neural networks are used for image compression, including deep neural networks, artificial neural networks, recurrent neural networks, and convolutional neural networks. This review paper therefore discusses how deep learning is applied across these networks to obtain better image compression with high accuracy, minimal loss, and superior visual quality, and it analyzes the application of deep learning to different types of images accordingly.
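As a concrete illustration of the CNN-based compression approaches surveyed above, the following is a minimal convolutional autoencoder sketch: an encoder downsamples the image into a compact latent code and a decoder reconstructs it. Layer sizes, the bottleneck width, and the 64x64 input resolution are illustrative assumptions, not a specific method from the review.

```python
# Minimal convolutional autoencoder for lossy image compression (illustrative sketch).
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, bottleneck_channels=8):
        super().__init__()
        # Encoder: downsample the image into a compact latent representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),   # H -> H/2
            nn.ReLU(),
            nn.Conv2d(32, bottleneck_channels, kernel_size=4, stride=2, padding=1),  # H/2 -> H/4
        )
        # Decoder: reconstruct the image from the latent code.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(bottleneck_channels, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
x = torch.rand(1, 3, 64, 64)                 # dummy RGB image batch
loss = nn.functional.mse_loss(model(x), x)   # reconstruction loss to minimize
```

Training such a model to minimize reconstruction error while keeping the bottleneck small is the basic trade-off between compression ratio and image quality that the review discusses.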

Author(s):  
А.И. Сотников

Time series forecasting has become a very intensive area of research, and the number of studies has even increased in recent years. Deep neural networks have proven to be effective and achieve high accuracy in many application areas. For these reasons, they are currently one of the most widely used machine learning methods for solving big data problems.


2021 ◽  
Author(s):  
Anwaar Ulhaq

Machine learning has grown in popularity and effectiveness over the last decade. It has become possible to solve complex problems, especially in artificial intelligence, due to the effectiveness of deep neural networks. While numerous books and countless papers have been written on deep learning, new researchers want to understand the field's history and current trends and to envision future possibilities. This review paper summarises the recorded work that resulted in such success and addresses emerging patterns and prospects.


2021 ◽  
Vol 2021 (11) ◽  
Author(s):  
L. Apolinário ◽  
N. F. Castro ◽  
M. Crispim Romão ◽  
J. G. Milhano ◽  
R. Pedro ◽  
...  

Abstract: An important aspect of the study of Quark-Gluon Plasma (QGP) in ultrarelativistic collisions of heavy ions is the ability to identify, in experimental data, a subset of the jets that were strongly modified by the interaction with the QGP. In this work, we propose studying Deep Learning techniques for this purpose. Samples of Z+jet events were simulated in vacuum (pp collisions) and medium (PbPb collisions) and used to train Deep Neural Networks with the objective of discriminating between medium- and vacuum-like jets within the medium (PbPb) sample. Dedicated Convolutional Neural Networks, Dense Neural Networks and Recurrent Neural Networks were developed and trained, and their performance was studied. Our results show the potential of these techniques for the identification of jet quenching effects induced by the presence of the QGP.
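To make the discrimination task concrete, here is a minimal dense binary classifier in the spirit of the medium- versus vacuum-like jet separation described above. The number of per-jet input observables, the layer widths, and the random stand-in data are assumptions; the paper's actual networks and jet representations differ.

```python
# Sketch of a dense network separating medium-like (PbPb) from vacuum-like (pp) jets.
import torch
import torch.nn as nn

n_features = 10  # hypothetical number of per-jet observables (e.g. pT, substructure variables)

model = nn.Sequential(
    nn.Linear(n_features, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 1),  # single logit: medium-like vs vacuum-like
)

jets = torch.randn(128, n_features)              # dummy batch of jets
labels = torch.randint(0, 2, (128, 1)).float()   # 1 = medium-like, 0 = vacuum-like
loss = nn.functional.binary_cross_entropy_with_logits(model(jets), labels)
```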


2020 ◽  
Author(s):  
Thomas R. Lane ◽  
Daniel H. Foil ◽  
Eni Minerali ◽  
Fabio Urbina ◽  
Kimberley M. Zorn ◽  
...  

Machine learning methods are attracting considerable attention from the pharmaceutical industry for use in drug discovery and applications beyond. In recent studies we have applied multiple machine learning algorithms and modeling metrics, and in some cases compared molecular descriptors, to build models for individual targets or properties on a relatively small scale. Several research groups have used large numbers of datasets from public databases such as ChEMBL in order to evaluate machine learning methods of interest to them. The largest of these studies used on the order of 1400 datasets. We have now extracted well over 5000 datasets from ChEMBL for use with the ECFP6 fingerprint and compared our proprietary software Assay Central™ with random forest, k-nearest neighbors, support vector classification, naïve Bayesian, AdaBoosted decision trees, and deep neural networks (3 levels). Model performance was assessed using an array of five-fold cross-validation metrics including area under the curve, F1 score, Cohen’s kappa and Matthews correlation coefficient. Based on ranked normalized scores for the metrics or datasets, all methods appeared comparable, while the distance from the top indicated that Assay Central™ and support vector classification were comparable. Unlike prior studies, which have placed considerable emphasis on deep neural networks (deep learning), no advantage was seen in this case where minimal tuning was performed for any of the methods. If anything, Assay Central™ may have been at a slight advantage, as the activity cutoff for each of the over 5000 datasets, representing over 570,000 unique compounds, was based on Assay Central™ performance, but support vector classification seems to be a strong competitor. We also apply Assay Central™ to prospective predictions for PXR and hERG to further validate these models. This work currently appears to be the largest comparison of machine learning algorithms to date. Future studies will likely evaluate additional databases, descriptors and algorithms, as well as further refine methods for evaluating and comparing models.
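The following sketch shows the general shape of such a benchmark: five-fold cross-validation of a classifier on binary fingerprint features, scored with AUC, F1, Cohen's kappa and Matthews correlation. Random arrays stand in for the ECFP6 fingerprints and ChEMBL activity labels; this is not the authors' pipeline or software.

```python
# Five-fold cross-validated benchmark sketch on stand-in fingerprint data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score, make_scorer
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 1024))   # stand-in for 1024-bit ECFP-style fingerprints
y = rng.integers(0, 2, size=500)           # stand-in for active/inactive labels

scoring = {
    "auc": "roc_auc",
    "f1": "f1",
    "kappa": make_scorer(cohen_kappa_score),
    "mcc": "matthews_corrcoef",
}
scores = cross_validate(RandomForestClassifier(n_estimators=100), X, y,
                        cv=5, scoring=scoring)
for name in scoring:
    print(name, scores[f"test_{name}"].mean())
```

Repeating this loop over thousands of datasets and several algorithms, then rank-normalizing the metrics, gives the kind of method-versus-method comparison the abstract describes.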


2020 ◽  
Vol 4 (5) ◽  
pp. 899-906
Author(s):  
Olvy Diaz Annesa ◽  
Condro Kartiko ◽  
Agi Prasetiadi

Reptiles are among the most common fauna in the territory of Indonesia, and quite a few people are interested in learning more about this fauna to broaden their knowledge. Based on previous research, deep learning, and in particular the CNN method, is needed for computer programs to identify reptile species from images. This research aims to determine the right model for producing high accuracy in the identification of reptile species. Thousands of images were generated through data augmentation of manually captured images. Using the Python programming language and the Dropout technique, this research obtained an accuracy of 93% in identifying 14 different types of reptiles.
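A minimal sketch of the kind of CNN-with-Dropout classifier described above follows. The layer sizes, 64x64 input resolution, and 0.5 dropout rate are illustrative assumptions; only the 14-class output reflects the abstract.

```python
# Small CNN with Dropout for 14-way reptile image classification (illustrative sketch).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
    nn.Flatten(),
    nn.Dropout(p=0.5),            # dropout to reduce overfitting on a small, augmented dataset
    nn.Linear(32 * 16 * 16, 14),  # 14 reptile classes
)

images = torch.rand(8, 3, 64, 64)             # dummy batch of augmented images
logits = model(images)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 14, (8,)))
```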


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6410
Author(s):  
Ke Zang ◽  
Wenqi Wu ◽  
Wei Luo

Deep learning models, especially recurrent neural networks (RNNs), have been successfully applied to automatic modulation classification (AMC) problems recently. However, deep neural networks are usually overparameterized, i.e., most of the connections between neurons are redundant. The large model size hinders the deployment of deep neural networks in applications such as Internet-of-Things (IoT) networks. Therefore, reducing parameters without compromising the network performance via sparse learning is often desirable, since it can alleviate the computational and storage burdens of deep learning models. In this paper, we propose a sparse learning algorithm that can directly train a sparsely connected neural network based on the statistics of weight magnitude and gradient momentum. We first used the MNIST and CIFAR10 datasets to demonstrate the effectiveness of this method. Subsequently, we applied it to RNNs with different pruning strategies on recurrent and non-recurrent connections for AMC problems. Experimental results demonstrated that the proposed method can effectively reduce the parameters of the neural networks while maintaining model performance. Moreover, we show that appropriate sparsity can further improve network generalization ability.
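The core idea of scoring connections by weight magnitude and gradient momentum can be sketched as a simple pruning step, shown below. The particular scoring rule, the mixing coefficient, and the sparsity level are assumptions for illustration; they are not the paper's exact algorithm.

```python
# Simplified pruning step: keep the connections with the highest combined
# weight-magnitude / gradient-momentum score, zero out the rest.
import torch

def prune_by_score(weight, momentum, sparsity=0.8, alpha=0.5):
    """Return a 0/1 mask keeping the top (1 - sparsity) fraction of connections."""
    score = alpha * weight.abs() + (1 - alpha) * momentum.abs()
    k = int(sparsity * weight.numel())                   # number of weights to remove
    threshold = torch.kthvalue(score.flatten(), k).values
    return (score > threshold).float()

w = torch.randn(256, 128)    # a layer's weight matrix
m = torch.randn(256, 128)    # its gradient-momentum buffer (e.g. from SGD with momentum)
mask = prune_by_score(w, m)
w_sparse = w * mask          # apply the mask; reapply it after each training update
print(f"kept {mask.mean().item():.0%} of connections")
```

In a sparse-training loop, a mask like this is maintained throughout training so that the network is optimized directly in its sparse form rather than pruned only after convergence.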


2018 ◽  
Vol 9 (1) ◽  
pp. 33-39 ◽  
Author(s):  
Subarno Pal ◽  
Soumadip Ghosh ◽  
Amitava Nag

Long short-term memory (LSTM) is a special type of recurrent neural network (RNN) architecture that was designed over simple RNNs for modeling temporal sequences and their long-range dependencies more accurately. In this article, the authors work with different types of LSTM architectures for sentiment analysis of movie reviews. It has been shown that LSTM RNNs are more effective than deep neural networks and conventional RNNs for sentiment analysis. Here, the authors explore different architectures associated with LSTM models to study their relative performance on sentiment analysis. A simple LSTM is first constructed and its performance is studied. In subsequent stages, LSTM layers are stacked one upon another, which yields an increase in accuracy. The LSTM layers are then made bidirectional to convey data both forward and backward through the network. The authors hereby show that a layered deep LSTM with bidirectional connections has better performance in terms of accuracy compared to the simpler versions of LSTM used here.
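A minimal sketch of the final configuration the article favors, a stacked bidirectional LSTM classifier, is shown below. The vocabulary size, embedding dimension, and hidden size are illustrative assumptions rather than the authors' settings.

```python
# Stacked bidirectional LSTM for binary sentiment classification (illustrative sketch).
import torch
import torch.nn as nn

class BiLSTMSentiment(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Two stacked LSTM layers reading the review both forward and backward.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, 1)  # positive/negative logit

    def forward(self, token_ids):
        embedded = self.embed(token_ids)
        _, (hidden, _) = self.lstm(embedded)
        # Concatenate the final forward and backward hidden states of the top layer.
        final = torch.cat([hidden[-2], hidden[-1]], dim=1)
        return self.classifier(final)

model = BiLSTMSentiment()
reviews = torch.randint(0, 10000, (4, 50))   # dummy batch of tokenized movie reviews
logits = model(reviews)                      # shape: (4, 1)
```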


2020 ◽  
Vol 15 ◽  
Author(s):  
Zichao Chen ◽  
Qi Zhou ◽  
Aziz Khan Turlandi ◽  
Jordan Jill ◽  
Rixin Xiong ◽  
...  

Deep Learning (DL) is a novel type of Machine Learning (ML) model. It is showing increasing promise in medicine, in the study and treatment of diseases and injuries, for assisting in data classification, recognizing novel disease symptoms and supporting complicated decision making. Deep learning is the form of machine learning typically implemented via multi-level neural networks. This work discusses the pros and cons of using DL in clinical cardiology, which also apply to medicine in general, while proposing certain directions as the more viable for clinical use. DL models called deep neural networks (DNNs), recurrent neural networks (RNNs) and convolutional neural networks (CNNs) have been applied to arrhythmias, electrocardiograms, ultrasonic analysis, genomes and endomyocardial biopsy. Convincingly, the results of the trained models are good, demonstrating the power of more expressive deep learning algorithms for clinical predictive modeling. In the future, more novel deep learning methods are expected to make a difference in the field of clinical medicine.


2021 ◽  
Vol 11 (9) ◽  
pp. 3883
Author(s):  
Spyridon Kardakis ◽  
Isidoros Perikos ◽  
Foteini Grivokostopoulou ◽  
Ioannis Hatzilygeroudis

Attention-based methods for deep neural networks constitute a technique that has attracted increased interest in recent years. Attention mechanisms can focus on important parts of a sequence and, as a result, enhance the performance of neural networks in a variety of tasks, including sentiment analysis, emotion recognition, machine translation and speech recognition. In this work, we study attention-based models built on recurrent neural networks (RNNs) and examine their performance in various contexts of sentiment analysis. Self-attention, global-attention and hierarchical-attention methods are examined under various deep neural models, training methods and hyperparameters. Even though attention mechanisms are a powerful recent concept in the field of deep learning, their exact effectiveness in sentiment analysis is yet to be thoroughly assessed. A comparative analysis is performed in a text sentiment classification task where baseline models are compared with and without the use of attention for every experiment. The experimental study additionally examines the proposed models’ ability to recognize opinions and emotions in movie reviews. The results indicate that attention-based models lead to substantial improvements in the performance of deep neural models, with up to a 3.5% gain in accuracy.
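The basic pattern of attention over RNN outputs can be illustrated with a minimal self-attention pooling layer, sketched below. The GRU encoder, the single-layer scoring network, and all sizes are assumptions for illustration, not one of the paper's exact configurations.

```python
# Attention pooling over RNN outputs for sentiment classification (illustrative sketch).
import torch
import torch.nn as nn

class AttentionRNN(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.attn_score = nn.Linear(hidden_dim, 1)      # one attention score per time step
        self.classifier = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids):
        outputs, _ = self.rnn(self.embed(token_ids))                # (batch, seq, hidden)
        weights = torch.softmax(self.attn_score(outputs), dim=1)    # attention weights over time
        context = (weights * outputs).sum(dim=1)                    # weighted sum of hidden states
        return self.classifier(context)

model = AttentionRNN()
logits = model(torch.randint(0, 10000, (4, 50)))  # dummy batch of tokenized texts
```

The attention weights let the classifier emphasize sentiment-bearing words instead of relying only on the final hidden state, which is the effect the comparative study measures.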


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1313
Author(s):  
Tejas Pandey ◽  
Dexmont Pena ◽  
Jonathan Byrne ◽  
David Moloney

In this paper, we study deep learning approaches for monocular visual odometry (VO). Deep learning solutions have proven effective in VO applications, replacing the need for highly engineered steps, such as feature extraction and outlier rejection in a traditional pipeline. We propose a new architecture combining ego-motion estimation and sequence-based learning using deep neural networks. We estimate camera motion from optical flow using Convolutional Neural Networks (CNNs) and model the motion dynamics using Recurrent Neural Networks (RNNs). The network outputs the relative 6-DOF camera poses for a sequence, and implicitly learns the absolute scale without the need for camera intrinsics. The entire trajectory is then integrated without any post-calibration. We evaluate the proposed method on the KITTI dataset and compare it with traditional and other deep learning approaches in the literature.
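A rough sketch of this CNN-plus-RNN structure is given below: a small CNN encodes each optical-flow frame, an LSTM models the motion over the sequence, and a linear head regresses a relative 6-DOF pose per step. All layer sizes and the input resolution are illustrative assumptions, not the paper's architecture.

```python
# CNN encoder over optical flow + LSTM over the sequence + 6-DOF pose head (illustrative sketch).
import torch
import torch.nn as nn

class FlowVO(nn.Module):
    def __init__(self, hidden_dim=256):
        super().__init__()
        self.cnn = nn.Sequential(                        # encodes a 2-channel optical-flow field
            nn.Conv2d(2, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),       # -> 32*4*4 features per frame
        )
        self.rnn = nn.LSTM(32 * 4 * 4, hidden_dim, batch_first=True)
        self.pose = nn.Linear(hidden_dim, 6)             # 3 translation + 3 rotation components

    def forward(self, flow_seq):                         # (batch, time, 2, H, W)
        b, t = flow_seq.shape[:2]
        feats = self.cnn(flow_seq.flatten(0, 1)).view(b, t, -1)
        hidden, _ = self.rnn(feats)
        return self.pose(hidden)                         # relative pose per time step

model = FlowVO()
poses = model(torch.rand(2, 5, 2, 64, 64))               # dummy optical-flow sequence
```

Accumulating the per-step relative poses along the sequence yields the full trajectory, which is the integration step the abstract refers to.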

