A COVID-19 Pandemic Artificial Intelligence–Based System With Deep Learning Forecasting and Automatic Statistical Data Acquisition: Development and Implementation Study (Preprint)

2021 ◽  
Author(s):  
Cheng-Sheng Yu ◽  
Shy-Shin Chang ◽  
Tzu-Hao Chang ◽  
Jenny L Wu ◽  
Yu-Jiun Lin ◽  
...  

BACKGROUND SARS-CoV-2, the virus causing the disease named COVID-19 by the World Health Organization, has led to more than 79.2 million confirmed cases and 1.7 million deaths. Control of the COVID-19 epidemic has become a crucial issue around the globe, but there are limited studies that investigate the global trend of the COVID-19 pandemic together with each country's policy measures. OBJECTIVE We aimed to develop an online artificial intelligence (AI) system to analyze the dynamic trend of the COVID-19 pandemic, facilitate forecasting and predictive modeling, and produce a heat map visualization of policy measures in 171 countries. METHODS The COVID-19 Pandemic AI System (CPAIS) integrated two data sets: the data set from the Oxford COVID-19 Government Response Tracker from the Blavatnik School of Government, which is maintained by the University of Oxford, and the data set from the COVID-19 Data Repository, which was established by the Johns Hopkins University Center for Systems Science and Engineering. This study utilized four statistical and deep learning techniques for forecasting: autoregressive integrated moving average (ARIMA), feedforward neural network (FNN), multilayer perceptron (MLP) neural network, and long short-term memory (LSTM). With regard to 1-year records (ie, whole time series data), records from the last 14 days served as the validation set to evaluate the performance of the forecast, whereas earlier records served as the training set. RESULTS A total of 171 countries that featured in both databases were included in the online system. The CPAIS was developed to explore variations, trends, and forecasts related to the COVID-19 pandemic across several countries. For instance, the number of confirmed monthly cases in the United States reached a local peak in July 2020 and another peak of 6,368,591 in December 2020. A dynamic heat map with policy measures depicts changes in COVID-19 measures for each country.
A total of 19 measures were embedded within the three sections presented on the website, and only 4 of the 19 measures were continuous measures related to financial support or investment. Deep learning models were used to enable COVID-19 forecasting; the performances of ARIMA, FNN, and the MLP neural network were not stable, as their forecast accuracy surpassed that of LSTM for only a few countries. LSTM demonstrated the best forecast accuracy for Canada, with a root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) of 2272.551, 1501.248, and 0.2723075, respectively. ARIMA (RMSE=317.53169; MAPE=0.4641688) and FNN (RMSE=181.29894; MAPE=0.2708482) demonstrated better performance for South Korea. CONCLUSIONS The CPAIS collects and summarizes information about the COVID-19 pandemic and offers data visualization and deep learning–based prediction. It might be a useful reference for predicting a serious outbreak or epidemic. Moreover, the system undergoes daily updates and includes the latest information on vaccination, which may change the dynamics of the pandemic.
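The forecast-error metrics reported above (RMSE, MAE, MAPE) can be computed over a 14-day validation window as follows. This is a generic sketch of the standard formulas, not the authors' code; the example arrays are hypothetical.

```python
import math

def rmse(actual, forecast):
    # Root mean square error: penalizes large deviations more heavily.
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def mae(actual, forecast):
    # Mean absolute error: average magnitude of the forecast errors.
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    # Mean absolute percentage error, expressed as a fraction (as in the abstract).
    return sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical daily case counts vs. model forecasts for a short window.
actual = [100.0, 110.0, 120.0]
forecast = [90.0, 115.0, 130.0]
print(rmse(actual, forecast), mae(actual, forecast), mape(actual, forecast))
```

A validation set held out from the end of the series, as described above, would be scored against each model's forecast in exactly this way.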

Author(s):  
D J Samatha Naidu ◽  
M.Gurivi Reddy

Farmers are the backbone of the nation, but the majority of cultivated crops in India are affected by various diseases at different stages of cultivation. Recent research shows that existing disease-detection approaches do not provide accurate results and, even when they identify diseases, do not offer optimized solutions. In the proposed work, recent developments in artificial intelligence through deep learning show that automatic image recognition (AIR) systems using CNN models can be very beneficial in such scenarios. Because a dataset of rice leaf disease images is not readily available, we created our own training dataset; since it is small, we used transfer learning to develop the proposed model. The proposed CNN architecture is based on the VGG-16 model and is trained and tested on a dataset collected from rice fields and the internet. The proposed model achieves a moderately high accuracy of 92.46%.


Author(s):  
Yaser AbdulAali Jasim

Nowadays, technology and computer science are rapidly developing many tools and algorithms, especially in the field of artificial intelligence. Machine learning is involved in the development of new methodologies and models that have become a novel area of application for artificial intelligence. Beyond the architectures of conventional neural network methodologies, deep learning refers to the use of artificial neural network architectures that include multiple processing layers. In this paper, convolutional neural network models were designed to detect (diagnose) plant disorders by applying samples of healthy and unhealthy plant images analyzed by means of deep learning methods. The models were trained using an open data set containing 18,000 images of ten different plants, including healthy plants. Several model architectures were trained, achieving a best performance of 97 percent when the respective [plant, disease] pairs are detected. This is very useful early-warning information, and the method can be further improved, given its substantially high performance rate, to support an automated plant disease detection system working in actual farm conditions.


2019 ◽  
Vol 11 (3) ◽  
pp. 65-89 ◽  
Author(s):  
Vinayakumar R ◽  
Soman KP ◽  
Prabaharan Poornachandran

Recently, due to the advances and impressive results of deep learning techniques in the fields of image recognition, natural language processing, and speech recognition for various long-standing artificial intelligence (AI) tasks, there has been great interest in applying them to security tasks as well. This article focuses on applying these deep learning techniques to network intrusion detection systems (N-IDS) with the aim of enhancing performance in classifying network connections as either good or bad. To apply this to N-IDS, this article models network traffic as time series data, specifically transmission control protocol / internet protocol (TCP/IP) packets in a predefined time window, using supervised deep learning methods such as the recurrent neural network (RNN), the RNN with identity-matrix initialization typically termed the identity recurrent neural network (IRNN), long short-term memory (LSTM), the clockwork RNN (CWRNN), and the gated recurrent unit (GRU), utilizing connection records of the KDDCup-99 challenge data set. The main interest is in evaluating the performance of the RNN against newly introduced methods such as LSTM and IRNN, which alleviate the vanishing and exploding gradient problems in memorizing long-term dependencies. The efficient network architecture for each deep model is chosen by comparing the performance of various network topologies and network parameters. The experiments with these chosen configurations were run for up to 1,000 epochs with learning rates varying between 0.01 and 0.5. The observed results of IRNN are relatively close to the performance of LSTM on the KDDCup-99 N-IDS data set. In addition to KDDCup-99, the effectiveness of the deep model architectures is evaluated on a refined version of KDDCup-99, NSL-KDD, and the most recent NIDS data set, UNSW-NB15.
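Modeling network traffic as time series, as described above, means slicing chronological connection records into fixed-length windows before an RNN consumes them one time step at a time. A minimal stdlib sketch of that windowing; the feature layout and window length are illustrative assumptions, not taken from the article.

```python
def window_records(records, window=3, step=1):
    """Group a chronological list of connection records into overlapping
    fixed-length windows — the unit an RNN/LSTM consumes step by step."""
    return [records[i:i + window]
            for i in range(0, len(records) - window + 1, step)]

# Hypothetical per-connection feature vectors (e.g. duration, bytes, flag code).
records = [[0.1, 200, 0], [0.2, 180, 0], [0.0, 900, 1], [0.3, 220, 0]]
windows = window_records(records, window=3)
print(len(windows))
```

Each window would then be paired with a good/bad label for supervised training, as in the KDDCup-99 setting the article describes.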


2020 ◽  
Vol 2 ◽  
pp. 58-61 ◽  
Author(s):  
Syed Junaid ◽  
Asad Saeed ◽  
Zeili Yang ◽  
Thomas Micic ◽  
Rajesh Botchu

The advances in deep learning algorithms, exponential computing power, and availability of digital patient data like never before have led to a wave of interest and investment in artificial intelligence in health care. No radiology conference is complete without a substantial dedication to AI. Many radiology departments are keen to get involved but are unsure of where and how to begin. This short article provides a simple road map to aid departments in getting involved with the technology, demystify key concepts, and pique interest in the field. We have broken down the journey into seven steps: problem, team, data, kit, neural network, validation, and governance.


AI ◽  
2021 ◽  
Vol 2 (1) ◽  
pp. 48-70
Author(s):  
Wei Ming Tan ◽  
T. Hui Teo

Prognostic techniques attempt to predict the Remaining Useful Life (RUL) of a subsystem or a component. Such techniques often use sensor data which are periodically measured and recorded into a time series data set. Such multivariate data sets form complex and non-linear inter-dependencies across recorded time steps and between sensors. Many existing prognostic algorithms have started to explore Deep Neural Networks (DNNs) and their effectiveness in the field. Although Deep Learning (DL) techniques outperform traditional prognostic algorithms, the networks are generally complex to deploy or train. This paper proposes a Multi-variable Time Series (MTS) focused approach to prognostics that implements a lightweight Convolutional Neural Network (CNN) with an attention mechanism. The convolution filters extract abstract temporal patterns from the multiple time series, while the attention mechanism reviews the information across the time axis and selects the relevant information. The results suggest that the proposed method not only produces superior RUL estimation accuracy but also trains many times faster than previously reported works. The network's suitability for deployment is also demonstrated on a lightweight hardware platform: it is not only more compact but also more efficient in resource-restricted environments.
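The attention step described above — reviewing information across the time axis and weighting the relevant steps — reduces, in its simplest form, to a softmax over per-time-step relevance scores followed by a weighted sum. A minimal stdlib sketch, not the paper's architecture; the scalar values and scores are hypothetical.

```python
import math

def attention_weights(scores):
    # Softmax over per-time-step scores; subtract the max for numerical stability.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(values, scores):
    # Weighted sum of per-time-step feature values by their attention weights.
    w = attention_weights(scores)
    return sum(wi * vi for wi, vi in zip(w, values))

# Hypothetical scalar features from 4 time steps of one sensor channel.
values = [0.2, 0.5, 0.9, 0.4]
scores = [0.1, 0.3, 2.0, 0.2]
print(attend(values, scores))
```

The time step with the highest score (here the third) dominates the output, which is exactly how attention "selects the relevant information" along the time axis.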


2021 ◽  
Vol 14 ◽  
pp. 263177452199062
Author(s):  
Benjamin Gutierrez Becker ◽  
Filippo Arcadu ◽  
Andreas Thalhammer ◽  
Citlalli Gamez Serna ◽  
Owen Feehan ◽  
...  

Introduction: The Mayo Clinic Endoscopic Subscore is a commonly used grading system to assess the severity of ulcerative colitis. Correctly grading colonoscopies using the Mayo Clinic Endoscopic Subscore is a challenging task, with suboptimal rates of interrater and intrarater variability observed even among experienced and sufficiently trained experts. In recent years, several machine learning algorithms have been proposed in an effort to improve the standardization and reproducibility of Mayo Clinic Endoscopic Subscore grading. Methods: Here we propose an end-to-end fully automated system based on deep learning to predict a binary version of the Mayo Clinic Endoscopic Subscore directly from raw colonoscopy videos. Unlike previous studies, the proposed method mimics the assessment done in practice by a gastroenterologist, that is, traversing the whole colonoscopy video, identifying visually informative regions and computing an overall Mayo Clinic Endoscopic Subscore. The proposed deep learning–based system has been trained and deployed on raw colonoscopies using Mayo Clinic Endoscopic Subscore ground truth provided only at the colon section level, without manually selecting frames driving the severity scoring of ulcerative colitis. Results and Conclusion: Our evaluation on 1672 endoscopic videos from a multisite data set obtained from the etrolizumab Phase II Eucalyptus and Phase III Hickory and Laurel clinical trials shows that our proposed methodology can grade endoscopic videos with a high degree of accuracy and robustness (Area Under the Receiver Operating Characteristic Curve = 0.84 for Mayo Clinic Endoscopic Subscore ⩾ 1, 0.85 for Mayo Clinic Endoscopic Subscore ⩾ 2 and 0.85 for Mayo Clinic Endoscopic Subscore ⩾ 3) and reduced amounts of manual annotation.
Plain language summary: Artificial intelligence can be used to automatically assess full endoscopic videos and estimate the severity of ulcerative colitis. In this work, we present an artificial intelligence algorithm for the automatic grading of ulcerative colitis in full endoscopic videos. Our artificial intelligence models were trained and evaluated on a large and diverse set of colonoscopy videos obtained from concluded clinical trials. We demonstrate not only that artificial intelligence is able to accurately grade full endoscopic videos, but also that using diverse data sets obtained from multiple sites is critical to train robust AI models that could potentially be deployed on real-world data.
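The Area Under the Receiver Operating Characteristic Curve values reported above can be computed directly from per-video scores via the Mann-Whitney formulation: the probability that a randomly chosen positive video outscores a randomly chosen negative one. A generic sketch with hypothetical scores, not the study's data.

```python
def auroc(scores, labels):
    """AUROC as the probability that a randomly chosen positive example
    receives a higher score than a randomly chosen negative one
    (ties count as half a win)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted probabilities of a binarized subscore for six videos.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
print(auroc(scores, labels))
```

Each reported threshold (subscore ⩾ 1, ⩾ 2, ⩾ 3) defines its own binary labeling, so the study's three AUROC values correspond to three such computations.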


Electronics ◽  
2021 ◽  
Vol 10 (13) ◽  
pp. 1514
Author(s):  
Seung-Ho Lim ◽  
WoonSik William Suh ◽  
Jin-Young Kim ◽  
Sang-Young Cho

The optimization of hardware processors and systems for performing deep learning operations such as Convolutional Neural Networks (CNNs) on resource-limited embedded devices is an active area of recent research. In order to run an optimized deep neural network model using the limited computational units and memory of an embedded device, it is necessary to quickly apply various configurations of hardware modules to various deep neural network models and find the optimal combination. The Electronic System Level (ESL) simulator based on SystemC is very useful for rapid hardware modeling and verification. In this paper, we designed and implemented a Deep Learning Accelerator (DLA) that performs Deep Neural Network (DNN) operations based on the RISC-V Virtual Platform implemented in SystemC, in order to enable rapid and diverse analysis of deep learning operations on an embedded device built around the RISC-V processor, a recently emerging embedded processor. The developed RISC-V based DLA prototype can analyze hardware requirements according to the CNN data set through configuration of the CNN DLA architecture; RISC-V-compiled software can run on the platform and execute a real neural network model such as Darknet. We ran the Darknet CNN model on the developed DLA prototype and confirmed that computational overhead and inference errors can be analyzed by examining the DLA architecture for various data sets.
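Analyzing the hardware requirements of a CNN layer on such an accelerator comes down, at its simplest, to counting weight parameters and activation memory per layer. A simplified illustration using the standard convolution formulas; the layer dimensions are hypothetical, not taken from the paper's DLA configuration.

```python
def conv_layer_footprint(in_ch, out_ch, k, out_h, out_w, bytes_per_value=4):
    """Memory footprint of one k x k convolution layer:
    weights = (k*k*in_ch + 1) * out_ch values (the +1 is the bias),
    output activations = out_h * out_w * out_ch values."""
    params = (k * k * in_ch + 1) * out_ch
    activations = out_h * out_w * out_ch
    return params * bytes_per_value, activations * bytes_per_value

# First 3x3 conv of a Darknet-style model on a 416x416 RGB input (hypothetical).
w_bytes, a_bytes = conv_layer_footprint(in_ch=3, out_ch=16, k=3, out_h=416, out_w=416)
print(w_bytes, a_bytes)
```

Summing such per-layer footprints across a model is one way a virtual platform can estimate whether a given DLA configuration fits an embedded device's memory budget.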


2018 ◽  
Vol 7 (11) ◽  
pp. 418 ◽  
Author(s):  
Tian Jiang ◽  
Xiangnan Liu ◽  
Ling Wu

Accurate and timely information about rice planting areas is essential for crop yield estimation, global climate change research and agricultural resource management. In this study, we present a novel pixel-level classification approach that uses a convolutional neural network (CNN) model to extract features of the enhanced vegetation index (EVI) time series curve for classification. The goal is to explore the practicability of deep learning techniques for rice recognition in complex landscape regions, where rice is easily confused with its surroundings, by using mid-resolution remote sensing images. A transfer learning strategy is utilized to fine-tune a pre-trained CNN model and obtain the temporal features of the EVI curve. Support vector machine (SVM), a traditional machine learning approach, is also implemented in the experiment. Finally, we evaluate the accuracy of the two models. Results show that our model performs better than SVM, with overall accuracies of 93.60% and 91.05%, respectively. Therefore, this technique is appropriate for estimating rice planting areas in southern China on the basis of a pre-trained CNN model using time series data. More opportunities and potential can be found for crop classification using remote sensing and deep learning techniques in future studies.


Author(s):  
Reza Yogaswara

Artificial Intelligence (AI) has become a driver of the Industry 4.0 revolution, promising many conveniences for both the government and industry sectors. The Internet of Things (IoT) and big data are examples of areas where AI can be implemented; these technologies, widely adopted in the Industry 4.0 era, can connect every device, allowing a person to automate all devices without being on location. Beyond that, many machines can now interpret a particular condition or event with the help of AI, such as smart cameras that detect traffic density on highways using Deep Learning Neural Network technology, which has been implemented in several regency and city governments to support their announced Smart City programs. In the industrial sector, many companies have also automated production and manufacturing machines using robots and Artificial Intelligence, so Industry 4.0 will increase competitiveness through smart devices; every entity that masters this technology gains a competitive advantage. However, amid the fairly massive development of Industry 4.0, the government must move quickly in adopting this platform; otherwise, it will lose the business-process efficiency needed to maintain the stability of public services. Therefore, proper knowledge and understanding are required for the government to face the Industry 4.0 era, in which the Chief Information Officer (CIO) can play an important role by providing support grounded in knowledge of Industry 4.0 technology trends, especially AI, which has been widely adopted across many sectors.


Over recent years, deep learning has been considered one of the primary choices for handling huge amounts of data. With its deeper hidden layers, it surpasses classical methods for outlier detection in wireless sensor networks. The Convolutional Neural Network (CNN) is a biologically inspired computational model and one of the most popular deep learning approaches. It comprises neurons that self-optimize through learning. EEG, generally known as electroencephalography, is a tool used for investigating brain function, and the EEG signal gives time-series data as output. In this paper, we propose a state-of-the-art technique that processes the time-series data generated by the sensor nodes, stored in a large dataset, into discrete one-second frames, which are then projected onto 2D map images. A convolutional neural network (CNN) is then trained to classify these frames. The results show improved detection accuracy and are encouraging.
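Segmenting the sensor time series into discrete one-second frames, as described above, depends only on the sampling rate. A minimal stdlib sketch; the 256 Hz rate and 4-second signal are illustrative assumptions, not taken from the paper.

```python
def one_second_frames(signal, sampling_rate_hz):
    """Cut a 1-D time series into consecutive non-overlapping one-second
    frames; a trailing partial second is discarded."""
    n = len(signal) // sampling_rate_hz
    return [signal[i * sampling_rate_hz:(i + 1) * sampling_rate_hz]
            for i in range(n)]

# Hypothetical EEG channel sampled at 256 Hz for 4 seconds.
signal = list(range(1024))
frames = one_second_frames(signal, 256)
print(len(frames), len(frames[0]))
```

Each frame would then be reshaped or projected into a 2D map image (e.g. a 16x16 grid at 256 samples per frame) before being fed to the CNN classifier.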

