Relative Analysis on Algorithms and Applications of Deep Learning

Author(s):  
Senbagavalli M. ◽  
Sathiyamoorthi V. ◽  
D. Sudaroli Vijayakumar

Deep learning is an artificial intelligence function that imitates the workings of the human brain in processing data and creating patterns for use in decision making. The main objective of this chapter is to provide a comprehensive examination of deep learning algorithms and their applications in various fields. Deep learning has exploded into public awareness, primarily as predictive and analytical products fill our world in the form of numerous human-centered smart-world systems, including targeted advertisements, natural language assistants and translators, and prototype self-driving vehicle systems. The chapter therefore provides a broad reference for those seeking a primer on deep learning algorithms and their applications, platforms, and uses in a variety of smart-world systems. This survey also offers a valuable reference for new deep learning practitioners, as well as those seeking to innovate through the application of deep learning.

2020 ◽  
Vol 2 ◽  
pp. 58-61 ◽  
Author(s):  
Syed Junaid ◽  
Asad Saeed ◽  
Zeili Yang ◽  
Thomas Micic ◽  
Rajesh Botchu

The advances in deep learning algorithms, exponential computing power, and unprecedented availability of digital patient data have led to a wave of interest and investment in artificial intelligence in health care. No radiology conference is complete without a substantial dedication to AI. Many radiology departments are keen to get involved but are unsure of where and how to begin. This short article provides a simple road map to help departments get started with the technology, demystify key concepts, and pique interest in the field. We have broken down the journey into seven steps: problem, team, data, kit, neural network, validation, and governance.


2021 ◽  
Vol 10 (2) ◽  
pp. 205846012199029
Author(s):  
Rani Ahmad

Background: The scope and productivity of artificial intelligence applications in health science and medicine, particularly in medical imaging, are rapidly progressing, with relatively recent developments in big data and deep learning and increasingly powerful computer algorithms. Accordingly, there are a number of opportunities and challenges for the radiological community. Purpose: To provide a review of the challenges and barriers experienced in diagnostic radiology on the basis of the key clinical applications of machine learning techniques. Material and Methods: Studies published in 2010–2019 that report on the efficacy of machine learning models were selected. A single contingency table was selected for each study to report the highest accuracy of radiology professionals and machine learning algorithms, and a meta-analysis of the studies was conducted based on these contingency tables. Results: The specificity for all the deep learning models ranged from 39% to 100%, whereas sensitivity ranged from 85% to 100%. The pooled sensitivity and specificity were 89% and 85% for the deep learning algorithms for detecting abnormalities, compared to 75% and 91% for radiology experts, respectively. The pooled specificity and sensitivity for the comparison between radiology professionals and deep learning algorithms were 91% and 81% for the deep learning models and 85% and 73% for the radiology professionals (p < 0.000), respectively. The pooled sensitivity of detection was 82% for health-care professionals and 83% for deep learning algorithms (p < 0.005). Conclusion: Radiomic information extracted from images through machine learning programs may capture features that are not discernible through visual examination and thus may improve the prognostic and diagnostic value of data sets.
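For readers unfamiliar with how such pooled figures are built up, the sketch below shows how per-study sensitivity and specificity follow from a single 2×2 contingency table; the counts are hypothetical and not drawn from the reviewed studies.

# Illustrative sketch: deriving per-study sensitivity and specificity from a
# 2x2 contingency table, as used when pooling results in a meta-analysis.
# The example counts are hypothetical, not taken from the reviewed studies.

def sensitivity_specificity(tp, fn, fp, tn):
    """Return (sensitivity, specificity) from contingency-table counts."""
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return sensitivity, specificity

# Hypothetical counts for one study: model predictions vs. ground truth
sens, spec = sensitivity_specificity(tp=89, fn=11, fp=15, tn=85)
print(f"sensitivity={sens:.2%}, specificity={spec:.2%}")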


Author(s):  
Nilesh Ade ◽  
Noor Quddus ◽  
Trent Parker ◽  
S. Camille Peres

One of the major implications of Industry 4.0 will be the application of digital procedures in process industries. Digital procedures are procedures that are accessed through a smart device such as a tablet or a phone. However, like paper-based procedures, their usability is limited by their accessibility. The issue of accessibility is magnified in tasks such as loading a hopper car with plastic pellets, wherein the operators typically place the procedure at a safe distance from the worksite. In the case of digital procedures, this drawback can be tackled using an artificial intelligence-based, voice-enabled conversational agent (chatbot). As part of this study, we have developed a chatbot for assisting digital procedure adherence. The chatbot is trained, through deep learning, on a possible set of queries from the operator and on text from the digital procedures, and it provides responses using natural language generation. The chatbot is tested using a simulated conversation with an operator performing the task of loading a hopper car.
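As an illustration of the kind of training described above, the following sketch (not the authors' implementation) shows a minimal deep learning intent classifier that maps operator queries to procedure steps; the queries, intent labels, and model sizes are hypothetical, and TensorFlow/Keras is assumed.

# Minimal sketch of a procedure-adherence chatbot front end: a small deep
# learning text classifier that maps an operator query to an intent, which
# would then select a templated or generated response from the procedure text.
# All queries, intents, and sizes below are hypothetical.
import numpy as np
import tensorflow as tf

queries = ["what is the next step", "how do i connect the loading arm",
           "is the hopper car hatch open", "repeat the last step"]
intents = [0, 1, 2, 3]  # one hypothetical intent id per query type

vectorizer = tf.keras.layers.TextVectorization(output_mode="int",
                                               output_sequence_length=8)
vectorizer.adapt(queries)

model = tf.keras.Sequential([
    vectorizer,                                       # text -> token ids
    tf.keras.layers.Embedding(input_dim=1000, output_dim=16),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),   # one class per intent
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(np.array(queries), np.array(intents), epochs=30, verbose=0)

# Predicted intent for a new operator query
print(model.predict(np.array(["what should i do next"])).argmax())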


2021 ◽  
Author(s):  
Yew Kee Wong

Deep learning is a type of machine learning that trains a computer to perform human-like tasks, such as recognizing speech, identifying images, or making predictions. Instead of organizing data to run through predefined equations, deep learning sets up basic parameters about the data and trains the computer to learn on its own by recognizing patterns using many layers of processing. This paper aims to illustrate some of the different deep learning algorithms and methods that can be applied to artificial intelligence analysis, as well as the opportunities offered by their application in various decision-making domains.
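To make the idea of layered pattern learning concrete, here is a minimal sketch, assuming TensorFlow/Keras: rather than running data through a predefined equation, a small network with several layers of processing learns the XOR pattern from its four examples.

# Minimal sketch: a few layers of processing learn the XOR pattern from
# examples. XOR is a classic case that no single linear rule can capture.
import numpy as np
import tensorflow as tf

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype="float32")
y = np.array([[0], [1], [1], [0]], dtype="float32")  # XOR labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),
    tf.keras.layers.Dense(8, activation="relu"),      # first hidden layer
    tf.keras.layers.Dense(8, activation="relu"),      # second hidden layer
    tf.keras.layers.Dense(1, activation="sigmoid"),   # prediction
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=500, verbose=0)
print(model.predict(X).round().ravel())  # the layers have learned the pattern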


Author(s):  
Seonho Kim ◽  
Jungjoon Kim ◽  
Hong-Woo Chun

Interest in research involving health-medical information analysis based on artificial intelligence, especially deep learning techniques, has recently been increasing. Most of the research in this field has focused on searching for new knowledge for predicting and diagnosing disease by revealing the relation between disease and various information features of data. These features are extracted by analyzing various clinical pathology data, such as EHRs (electronic health records) and academic literature, using techniques of data analysis, natural language processing, etc. However, more research and interest are still needed in applying the latest advanced artificial intelligence-based data analysis techniques to bio-signal data, which are continuous physiological records such as EEG (electroencephalography) and ECG (electrocardiogram) signals. Unlike other types of data, bio-signal data take the form of time series of real numbers, and applying deep learning to them raises many issues that need to be resolved in preprocessing, learning, and analysis. Such issues include feature selection and learning stages that remain black boxes, difficulties in recognizing and identifying effective features, high computational complexity, etc. In this paper, to solve these issues, we provide an encoding-based Wave2vec time series classifier model, which combines signal processing and deep learning-based natural language processing techniques. To demonstrate its advantages, we provide the results of three experiments conducted with the University of California Irvine EEG data, a real-world benchmark bio-signal dataset. Through encoding, the bio-signals, which are real-number time series in the form of waves, are first converted into sequences of symbols, or into sequences of wavelet patterns that are then mapped to symbols; the proposed model then vectorizes these symbols by learning the sequences with deep learning-based natural language processing. Models for each class can be constructed by learning from the vectorized wavelet patterns and the training data, and the implemented models can be used for prediction and diagnosis of diseases by classifying new data. The proposed method enhances data readability and makes the feature selection and learning processes more intuitive by converting the real-number time series into sequences of symbols. In addition, it facilitates intuitive and easy recognition and identification of influential patterns. Furthermore, by drastically reducing computational complexity through data simplification in the encoding process, without deterioration of analysis performance, it facilitates the large-capacity real-time data analysis that is essential in developing real-time diagnosis systems.
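As a rough illustration of the encoding idea (not the authors' Wave2vec code), the sketch below discretizes a real-valued signal into amplitude symbols, SAX-style, and embeds the resulting symbol sequences with a word2vec-style model; the bin count, window length, and use of gensim (>= 4) are assumptions for illustration only.

# Rough sketch of encoding a real-valued bio-signal into symbols and embedding
# them like words. The signal, bin count, and window length are illustrative.
import numpy as np
from gensim.models import Word2Vec

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * rng.standard_normal(2000)

# Encode: map each sample to one of 8 amplitude bins ("a".."h"), SAX-style
bin_edges = np.quantile(signal, np.linspace(0, 1, 9)[1:-1])
symbols = np.array(list("abcdefgh"))[np.digitize(signal, bin_edges)]

# Treat fixed-length windows of symbols as "sentences" for embedding
sentences = [list(symbols[i:i + 50]) for i in range(0, len(symbols) - 50, 50)]
model = Word2Vec(sentences=sentences, vector_size=16, window=5, min_count=1, sg=1)

# Each symbol now has a learned vector; a classifier could then be trained on
# sequences of these vectors to predict a diagnostic label.
print(model.wv["a"][:4])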


Author(s):  
Jay Rodge ◽  
Swati Jaiswal

Deep learning and artificial intelligence (AI) have been trending due to the capabilities and state-of-the-art results they provide. They have replaced some highly skilled professionals with neural network-powered AI, also known as deep learning algorithms. Deep learning works primarily on neural networks. This chapter discusses the working of a neuron, which is the unit component of a neural network. Numerous techniques, such as the choice of activation functions and training strategies, can be incorporated while designing a neural network to improve its performance; these are explained in detail. Deep learning also has challenges, such as overfitting, that are difficult to avoid but can be overcome using the proper techniques and steps discussed here. The chapter will help academicians, researchers, and practitioners to further investigate the associated areas of deep learning and its applications in the autonomous vehicle industry.
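A minimal sketch of the unit discussed in the chapter follows: a neuron computes a weighted sum of its inputs plus a bias and applies an activation function. The weights and inputs shown are arbitrary illustrative values.

# Minimal sketch of a single neuron: weighted sum of inputs plus bias,
# followed by an activation function. Values below are arbitrary.
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])     # inputs to the neuron
w = np.array([0.8, 0.1, -0.4])     # learned weights
b = 0.2                            # learned bias

z = np.dot(w, x) + b               # weighted sum (pre-activation)
print("ReLU output:", relu(z))
print("Sigmoid output:", sigmoid(z))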


2020 ◽  
Vol 9 (1) ◽  
pp. 2663-2667

In this century, artificial intelligence (AI) has gained a lot of popularity because of the performance of AI models and their good accuracy scores. Natural language processing (NLP), a major subfield of AI, deals with analyzing and processing huge amounts of natural language data. Text summarization is one of the major applications of NLP. The basic idea of text summarization is that when we have long news articles or reviews and need their gist within a short period of time, summarization is useful. Text summarization also finds a unique place in many applications, such as patent research, help desks, and customer support. There are numerous ways to build a text summarization model, but this paper mainly focuses on building one using the seq2seq architecture and the TensorFlow API.
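The following compact sketch, assuming TensorFlow/Keras, shows the general shape of such a seq2seq model: an LSTM encoder reads the article tokens and an LSTM decoder generates the summary tokens conditioned on the encoder's final states. Vocabulary size and dimensions are placeholders, and training data and inference decoding are omitted.

# Schematic seq2seq summarizer: LSTM encoder over article tokens, LSTM decoder
# over summary tokens initialized with the encoder's final states.
import tensorflow as tf

vocab_size, embed_dim, latent_dim = 10000, 128, 256  # placeholder sizes

# Encoder: consumes the source article and keeps only its final states
encoder_inputs = tf.keras.Input(shape=(None,), name="article_tokens")
enc_emb = tf.keras.layers.Embedding(vocab_size, embed_dim)(encoder_inputs)
_, state_h, state_c = tf.keras.layers.LSTM(latent_dim, return_state=True)(enc_emb)

# Decoder: generates the summary, conditioned on the encoder's final states
decoder_inputs = tf.keras.Input(shape=(None,), name="summary_tokens")
dec_emb = tf.keras.layers.Embedding(vocab_size, embed_dim)(decoder_inputs)
dec_out, _, _ = tf.keras.layers.LSTM(latent_dim, return_sequences=True,
                                     return_state=True)(
    dec_emb, initial_state=[state_h, state_c])
outputs = tf.keras.layers.Dense(vocab_size, activation="softmax")(dec_out)

model = tf.keras.Model([encoder_inputs, decoder_inputs], outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()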


2021 ◽  
Vol 13 (21) ◽  
pp. 11631
Author(s):  
Der-Jang Chi ◽  
Chien-Chou Chu

“Going concern” is a professional term in the domain of accounting and auditing. The issuance of appropriate audit opinions by certified public accountants (CPAs) and auditors is critical to companies as going concerns, as misjudgment and/or failure to identify the probability of bankruptcy can cause heavy losses to stakeholders and affect corporate sustainability. In the era of artificial intelligence (AI), deep learning algorithms are widely used by practitioners, and academic research is also gradually embarking on projects in various domains. However, the use of deep learning algorithms in going-concern prediction remains limited. In contrast to prior work in the literature, this study uses long short-term memory (LSTM) and gated recurrent unit (GRU) networks for learning and training, in order to construct effective and highly accurate going-concern prediction models. The sample pool consists of Taiwan Stock Exchange Corporation (TWSE) and Taipei Exchange (TPEx) listed companies in 2004–2019, including 86 companies with going-concern doubt and 172 companies without going-concern doubt, for a total of 258 sampled companies. There are 20 research variables, comprising 16 financial variables and 4 non-financial variables. Based on performance indicators such as accuracy, precision, recall/sensitivity, specificity, F1-score, and Type I and Type II error rates, both the LSTM and GRU models perform well. In terms of accuracy, the LSTM model reports 96.15% while the GRU model reports 94.23%.
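For orientation, a schematic sketch of the two model families compared in the study is given below, assuming TensorFlow/Keras; the sequence length, layer sizes, and training details are illustrative assumptions, not the authors' exact configuration.

# Schematic sketch of an LSTM and a GRU binary classifier over the 20 research
# variables. The number of reporting periods and layer sizes are assumptions.
import tensorflow as tf

n_timesteps, n_features = 4, 20   # assumed: a few reporting periods x 20 variables

def build_model(cell):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_timesteps, n_features)),
        cell(32),                                        # recurrent layer (LSTM or GRU)
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # going-concern doubt: yes/no
    ])

lstm_model = build_model(tf.keras.layers.LSTM)
gru_model = build_model(tf.keras.layers.GRU)
for m in (lstm_model, gru_model):
    m.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])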

