A Comprehensive Survey of Deep Learning Models Based on Keras Framework

Author(s):  
Bahzad Taha Chicho ◽  
Amira Bibo Sallow

Python is one of the most widely adopted programming languages, having displaced a number of other languages in the field. Python is popular with developers for a variety of reasons, one of which is its incredibly diverse collection of libraries. The most compelling reasons for adopting Keras come from its guiding principles, particularly those related to usability. Aside from the simplicity of learning and model construction, Keras offers a wide variety of production deployment options and robust support for multiple GPUs and distributed training. A powerful, easy-to-use, free, and open-source Python library is the most important tool for developing and evaluating deep learning models. The aim of this paper is to provide an up-to-date survey of Keras, a Python-based deep learning Application Programming Interface (API) that runs on top of the machine learning framework TensorFlow, from several perspectives. The library is used in conjunction with TensorFlow, PyTorch, CoDeepNEAT, and Pygame to integrate deep learning models into applied areas such as cardiovascular disease diagnosis, graph neural networks, health-issue identification, COVID-19 recognition, skin tumor detection, image detection, and so on. Furthermore, the authors summarize Keras's details, goals, challenges, significant outcomes, and the findings obtained with this framework.
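To illustrate the usability the survey emphasizes, the following minimal sketch (not taken from the surveyed paper) defines, compiles, and trains a small Keras model; the layer sizes and the synthetic data are placeholders.

```python
# Minimal Keras sketch: a small fully connected binary classifier.
# Layer sizes, optimizer choice, and the synthetic data are illustrative placeholders.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Synthetic stand-in data: 1000 samples, 20 features, binary labels.
X = np.random.rand(1000, 20).astype("float32")
y = np.random.randint(0, 2, size=(1000,))

model = keras.Sequential([
    layers.Input(shape=(20,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# Compiling and fitting in a few lines is the usability point made above.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
model.summary()
```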

Author(s):  
Xiaodong Yi ◽  
Ziyue Luo ◽  
Chen Meng ◽  
Mengdi Wang ◽  
Guoping Long ◽  
...  

Author(s):  
Malusi Sibiya ◽  
Mbuyu Sumbwanyambe

Machine learning systems use different algorithms to detect the diseases affecting plant leaves. Nevertheless, selecting a suitable machine learning framework differs from study to study, depending on the features and complexity of the software packages. This paper presents a taxonomic inspection of the literature on deep learning frameworks for the detection of plant leaf diseases. The objective of this study is to identify the dominant software frameworks in the literature for modelling machine learning plant leaf disease detection systems.


Data Science ◽  
2021 ◽  
pp. 1-15
Author(s):  
Jörg Schad ◽  
Rajiv Sambasivan ◽  
Christopher Woodward

Experimenting with different models, documenting results and findings, and repeating these tasks are day-to-day activities for machine learning engineers and data scientists. There is a need to keep control of the machine learning pipeline and its metadata, so that users can iterate quickly through experiments and retrieve key findings and observations from historical activity. This is the need that Arangopipe serves. Arangopipe is an open-source tool that provides a data model capturing the essential components of any machine learning life cycle, together with an application programming interface that permits machine learning engineers to record the details of the salient steps in building their models. The components of the data model and an overview of the application programming interface are presented, along with illustrative examples of basic and advanced machine learning workflows. Arangopipe is useful not only for users developing machine learning models but also for users deploying and maintaining them.
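As a rough, self-contained sketch of the kind of metadata such a life-cycle data model captures (this is illustrative Python, not Arangopipe's actual API; the entity and field names are assumptions), the example below records the salient components of one machine learning run.

```python
# Illustrative sketch of an ML life-cycle data model in the spirit described above.
# Entity and field names are assumptions for illustration, not Arangopipe's API.
from dataclasses import dataclass, field, asdict
from typing import Dict

@dataclass
class Dataset:
    name: str
    source: str

@dataclass
class ModelSpec:
    name: str
    framework: str
    params: Dict[str, float] = field(default_factory=dict)

@dataclass
class Run:
    project: str
    dataset: Dataset
    model: ModelSpec
    metrics: Dict[str, float] = field(default_factory=dict)

# Record one experiment so it can be retrieved and compared later.
run = Run(
    project="churn-prediction",
    dataset=Dataset(name="customers-v3", source="warehouse"),
    model=ModelSpec(name="gbm-baseline", framework="scikit-learn",
                    params={"max_depth": 6, "n_estimators": 300}),
    metrics={"auc": 0.87},
)
print(asdict(run))  # a tool like Arangopipe would persist such a record for later retrieval
```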


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
Y Li ◽  
S Rao ◽  
A Hassaine ◽  
R Ramakrishnan ◽  
Y Zhu ◽  
...  

Abstract Background: Forecasting incident heart failure is a critical demand for prevention. Recent research has suggested the superior performance of deep learning models on prediction tasks using electronic health records. However, even with relatively accurate predictive performance, the major impediments to the wider use of deep learning models for clinical decision making are the difficulty of assigning a level of confidence to model predictions and the interpretability of those predictions. Purpose: We aimed to develop a deep learning framework for more accurate incident heart failure prediction, with provision of measures of uncertainty and interpretability. Methods: We used a longitudinal linked electronic health records dataset, Clinical Practice Research Datalink, involving 788,880 patients, 8.3% of whom had an incident heart failure diagnosis. To embed the uncertainty estimation mechanism into the deep learning models, we developed a probabilistic framework based on a novel transformer deep learning model: deep Bayesian Gaussian processes (DBGP). We investigated the performance of incident heart failure prediction and uncertainty estimation for the model and validated it using an external held-out dataset. Diagnoses, medications, and age for each encounter were included as predictors. By comparing the uncertainties, we investigated the possibility of distinguishing correct predictions from wrong ones to avoid potential misclassification. Using model distillation, which mimics a well-trained complex model with simpler models, we investigated the importance of associations between diagnoses, medications, and heart failure with an interpretable linear regression component learned from DBGP. Results: The DBGP achieved high predictive performance, with an AUROC of 0.941 on external validation. More importantly, it showed that the uncertainty information could distinguish correct predictions from wrong ones, with a significant difference (p-values computed with 500 samples) between the distributions of uncertainties for negative predictions (3.21e-69 between true negative and false negative) and for positive predictions (3.39e-22 between true positive and false positive). Utilising the distilled model, we can specify the contribution of each diagnosis and medication to heart failure prediction. For instance, Losartan/Fosinopril, Bisoprolol, and left bundle-branch block showed strong associations with heart failure incidence, with coefficients of 0.11 (95% CI: 0.10, 0.12), 0.09 (0.08, 0.11), and 0.09 (0.07, 0.11) respectively; peritoneal adhesions, trochanteric bursitis, and galactorrhea showed strong negative associations, with coefficients of −0.07 (−0.09, −0.05), −0.07 (−0.09, −0.04), and −0.06 (−0.08, −0.04) respectively. Conclusions: Our novel probabilistic deep learning framework adds a measure of uncertainty to each prediction and helps to mitigate misclassification. Model distillation provides an opportunity to interpret deep learning models and offers a data-driven perspective on risk factor analysis. Funding Acknowledgement: Type of funding source: Public Institution(s). Main funding source(s): Oxford Martin School, University of Oxford; NIHR Oxford Biomedical Research Centre, University of Oxford.
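As a rough illustration of the distillation idea described above (not the authors' DBGP code), the following sketch trains a simple "teacher" classifier and then fits a linear regression to the teacher's predicted risks, so that the linear coefficients can be read as per-feature contributions; the teacher model, data, and feature encoding are placeholders.

```python
# Hedged sketch of model distillation into an interpretable linear component.
# The teacher model, data, and feature names are placeholders, not the paper's setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LinearRegression

# Placeholder data standing in for encoded diagnoses, medications, and age.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

# "Teacher": any well-trained complex model.
teacher = GradientBoostingClassifier(random_state=0).fit(X, y)
soft_targets = teacher.predict_proba(X)[:, 1]  # predicted risk per patient

# "Student": a linear model distilled to mimic the teacher's risk scores.
student = LinearRegression().fit(X, soft_targets)

# Each coefficient approximates a feature's contribution to the predicted risk,
# analogous to reading off associations between predictors and the outcome.
for i, coef in enumerate(student.coef_):
    print(f"feature_{i}: {coef:+.3f}")
```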


2021 ◽  
Author(s):  
Vinicius Cruzeiro ◽  
Madushanka Manathunga ◽  
Kenneth M. Merz, Jr. ◽  
Andreas Goetz

The quantum mechanics/molecular mechanics (QM/MM) approach is an essential and well-established tool in computational chemistry that has been widely applied to a myriad of biomolecular problems in the literature. In this publication, we report the integration of the QUantum Interaction Computational Kernel (QUICK) program as an engine to perform electronic structure calculations in QM/MM simulations with AMBER. This integration is available through either a file-based interface (FBI) or an application programming interface (API). Since QUICK is an open-source GPU-accelerated code with multi-GPU parallelization, users can take advantage of “free of charge” GPU acceleration in their QM/MM simulations. In this work, we discuss implementation details and give usage examples. We also investigate energy conservation in typical QM/MM simulations performed in the microcanonical ensemble. Finally, benchmark results for two representative systems, the N-methylacetamide (NMA) molecule and the photoactive yellow protein (PYP) in bulk water, show the performance of QM/MM simulations with QUICK and AMBER using a varying number of CPU cores and GPUs. Our results highlight the acceleration obtained from a single or multiple GPUs; we observed speedups of up to 38x for a single GPU relative to a single CPU core and of up to 2.6x when comparing four GPUs to a single GPU. Results also reveal speedups of up to 3.5x when the API is used instead of FBI.


2015 ◽  
Vol 8 (2) ◽  
pp. 2271-2312 ◽  
Author(s):  
O. Conrad ◽  
B. Bechtel ◽  
M. Bock ◽  
H. Dietrich ◽  
E. Fischer ◽  
...  

Abstract. The System for Automated Geoscientific Analyses (SAGA) is an open-source Geographic Information System (GIS), mainly licensed under the GNU General Public License. Since its first release in 2004, SAGA has rapidly developed from a specialized tool for digital terrain analysis into a comprehensive and globally established GIS platform for scientific analysis and modeling. SAGA is coded in C++ with an object-oriented design and runs under several operating systems, including Windows and Linux. Key functional features of the modular software architecture comprise an application programming interface for the development and implementation of new geoscientific methods, an easily approachable graphical user interface with many visualization options, a command line interpreter, and interfaces to scripting languages such as R and Python. The current version 2.1.4 offers more than 700 tools, which are implemented in dynamically loadable libraries or shared objects and represent the broad scope of SAGA in numerous fields of geoscientific endeavor and beyond. In this paper, we describe the system's architecture, functionality, and its current state of development and implementation. Further, we highlight the wide spectrum of scientific applications of SAGA in a review of published studies, with special emphasis on the core application areas of digital terrain analysis, geomorphology, soil science, climatology and meteorology, as well as remote sensing.
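As a hedged illustration of the command line interpreter mentioned above, the call below invokes a SAGA terrain-analysis tool from Python; the tool library name, tool index, and parameter flags follow typical SAGA usage but may differ between versions, so treat them as assumptions.

```python
# Hedged sketch: calling SAGA's command line interpreter (saga_cmd) from Python.
# The tool library ("ta_morphometry"), tool index (0 = slope/aspect/curvature),
# and flag names are assumptions based on typical SAGA usage and may vary by version.
import subprocess

cmd = [
    "saga_cmd",
    "ta_morphometry", "0",       # terrain-analysis tool for slope, aspect, curvature
    "-ELEVATION", "dem.sgrd",    # input digital elevation model (SAGA grid)
    "-SLOPE", "slope.sgrd",      # output slope grid
]
subprocess.run(cmd, check=True)
```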


2021 ◽  
Author(s):  
Nikolaos Bakalos ◽  
Iason Katsamenis ◽  
Eleni Eirini Karolou ◽  
Nikolaos Doulamis

Man overboard incidents on a maritime vessel are serious accidents in which rapid detection of the event is crucial for the safe retrieval of the person. To this end, the use of deep learning models as automatic detectors of these scenarios has been tested and proven effective; however, suitable capturing methods are imperative for the learning framework to operate well. Thermal imaging can be a suitable monitoring method, as it is not affected by illumination changes and can operate in rough conditions, such as open-sea travel. We investigate the use of a convolutional autoencoder trained on thermal data as a mechanism for the automatic detection of man overboard scenarios. Moreover, we present a dataset that was created to emulate such events and was used for training and testing the algorithm.
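The following is a minimal sketch of the kind of convolutional autoencoder described above, trained to reconstruct thermal frames so that poorly reconstructed frames can be flagged as anomalies; the input resolution, layer sizes, and threshold are placeholders, not the authors' configuration.

```python
# Hedged sketch of a convolutional autoencoder for anomaly detection on thermal frames.
# Input size, layer widths, and the anomaly threshold are illustrative placeholders.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

frames = np.random.rand(256, 64, 64, 1).astype("float32")  # stand-in thermal frames

autoencoder = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2D(8, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(8, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2D(1, 3, padding="same", activation="sigmoid"),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(frames, frames, epochs=10, batch_size=32, verbose=0)

# Frames with high reconstruction error are flagged as potential man-overboard events.
recon = autoencoder.predict(frames, verbose=0)
errors = np.mean((recon - frames) ** 2, axis=(1, 2, 3))
anomalies = errors > errors.mean() + 3 * errors.std()  # placeholder threshold
```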


2019 ◽  
Author(s):  
Thin Nguyen ◽  
Hang Le ◽  
Thomas P. Quinn ◽  
Tri Nguyen ◽  
Thuc Duy Le ◽  
...  

Abstract: The development of new drugs is costly, time-consuming, and often accompanied by safety issues. Drug repurposing can avoid the expensive and lengthy process of drug development by finding new uses for already approved drugs. In order to repurpose drugs effectively, it is useful to know which proteins are targeted by which drugs. Computational models that estimate the interaction strength of new drug–target pairs have the potential to expedite drug repurposing. Several models have been proposed for this task. However, these models represent the drugs as strings, which is not a natural way to represent molecules. We propose a new model called GraphDTA that represents drugs as graphs and uses graph neural networks to predict drug–target affinity. We show that graph neural networks not only predict drug–target affinity better than non-deep learning models, but also outperform competing deep learning methods. Our results confirm that deep learning models are appropriate for drug–target binding affinity prediction, and that representing drugs as graphs can lead to further improvements. Availability of data and materials: The proposed models are implemented in Python. Related data, pre-trained models, and source code are publicly available at https://github.com/thinng/GraphDTA. All scripts and data needed to reproduce the post-hoc statistical analysis are available from https://doi.org/10.5281/[email protected]
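To make the graph representation concrete, here is a minimal sketch (not the GraphDTA code; see the repository above for that) in which a drug's atoms are nodes, bonds are edges, and a small graph neural network pools node embeddings into a molecule-level affinity score; the node features and layer sizes are placeholders, and the example assumes PyTorch Geometric is installed.

```python
# Hedged sketch of a graph neural network scoring a drug graph, in the spirit of
# GraphDTA but not its actual implementation. Requires torch and torch_geometric.
import torch
from torch import nn
from torch_geometric.nn import GCNConv, global_mean_pool

class DrugGNN(nn.Module):
    def __init__(self, num_atom_features: int = 8, hidden: int = 32):
        super().__init__()
        self.conv1 = GCNConv(num_atom_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.readout = nn.Linear(hidden, 1)  # molecule-level affinity score

    def forward(self, x, edge_index, batch):
        x = torch.relu(self.conv1(x, edge_index))
        x = torch.relu(self.conv2(x, edge_index))
        x = global_mean_pool(x, batch)       # aggregate atom embeddings per molecule
        return self.readout(x)

# Toy molecule: 4 atoms with placeholder features and bidirectional bond edges.
x = torch.rand(4, 8)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
batch = torch.zeros(4, dtype=torch.long)     # all atoms belong to graph 0

model = DrugGNN()
affinity = model(x, edge_index, batch)
print(affinity.shape)  # torch.Size([1, 1])
```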


Electronics ◽  
2020 ◽  
Vol 9 (7) ◽  
pp. 1077
Author(s):  
Philipp Tschannen ◽  
Ali Ahmed

Given the current state of software development, it seems that we are nowhere near vulnerability-free software applications, for many reasons, and software developers are one of them. Insecure coding practices, the complexity of the task at hand, and usability issues, amongst other reasons, make it hard for software developers to maintain secure code. When it comes to cryptographic currencies, the need to assure security is unavoidable. For example, Bitcoin is a peer-to-peer software system that is primarily used as digital money. Many software libraries supporting various programming languages allow access to the Bitcoin system via an Application Programming Interface (API). APIs that are used inappropriately can lead to security vulnerabilities that are hard to discover, resulting in many zero-day exploits. Making APIs usable is, therefore, an essential aspect of the quality and robustness of the software. This paper surveys the general academic literature concerning API usability and usable security. Furthermore, it evaluates the API usability of Libbitcoin, a well-known C++ implementation of the Bitcoin system, and assesses how the findings of this evaluation could affect the applications that use Libbitcoin. For that purpose, the paper proposes two static analysis tools to further investigate the use of Libbitcoin APIs in open-source projects from a security usability perspective. The findings of this research have improved Libbitcoin in many places, as will be shown in this paper.

