Research Trend Analysis of Artificial Intelligence Rainfall Prediction Algorithms Based on Knowledge Networks

2021 ◽  
Vol 945 (1) ◽  
pp. 012073
Author(s):  
Ting Zhang ◽  
Soung Yue Liew ◽  
Xiao Yan Huang ◽  
How Chinh Lee ◽  
Dong Hong Qin

Abstract This research uses the CiteSpace software as a tool to survey research on artificial intelligence-based rainfall prediction models and algorithms in the China National Knowledge Infrastructure (CNKI) and Web of Science databases, summarize the relevant research hotspots and topics, identify the latest research trends, and provide a reference for further advancing rainfall prediction models and algorithms. Through knowledge network analysis, the following conclusions are drawn: (1) The literature on rainfall prediction models and algorithms has shown an increasing trend over time. (2) Scientific research institutions, colleges, and universities in various countries publish the bulk of the relevant documents. (3) Current research trends center on deep learning and meteorological satellites, while neural network studies tend toward a variety of data assimilation and hybrid models. (4) Globally, research on artificial intelligence-based rainfall prediction models and algorithms places growing emphasis on the application of deep learning algorithms.

Membranes ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 672
Author(s):  
Md. Ashrafuzzaman

Ion channels are linked to important cellular processes. For more than half a century, we have been learning about the structural and functional aspects of ion channels using biological, physiological, biochemical, and biophysical principles and techniques. In recent years, bioinformaticians and biophysicists with expertise and interest in computer science techniques, including versatile algorithms, have begun covering a multitude of physiological aspects, especially the evolution, mutations, and genomics of functional channels and channel subunits. In these focused research areas, artificial intelligence (AI), machine learning (ML), and deep learning (DL) algorithms and associated models have become very popular. Drawing on the available articles and information, this review provides an introduction to this novel research trend. Ion channels are usually understood in terms of structural and functional perspectives, gating mechanisms, transport properties, channel protein mutations, etc. Focused research on ion channels over many decades has accumulated a huge body of data, which may be utilized in a specialized scientific manner to quickly draw conclusions about pinpointed aspects of channels. AI, ML, and DL techniques and models can serve as helpful tools here. This review aims to explain the ways we may use bioinformatics techniques and thus draw a few lines across the avenue to let the ion channel features appear clearer.


2020 ◽  
Vol 63 (6) ◽  
pp. 900-912
Author(s):  
Oswalt Manoj S ◽  
Ananth J P

Abstract Rainfall prediction is an active area of research, as it enables farmers to make effective decisions regarding agriculture in both cultivation and irrigation. Existing prediction models fall short because rainfall prediction depends on three major factors, including humidity, rainfall, and the rainfall recorded in previous years, which results in huge time consumption and heavy computational effort in the analysis. Thus, this paper introduces a rainfall prediction model based on a deep learning network, the convolutional long short-term memory (convLSTM) system, which promises prediction based on spatial-temporal patterns. The weights of the convLSTM are tuned optimally using the proposed Salp-stochastic gradient descent algorithm (S-SGD), which integrates the Salp swarm algorithm (SSA) into the stochastic gradient descent (SGD) algorithm in order to facilitate globally optimal tuning of the weights and to assure better prediction accuracy. In addition, the proposed deep learning framework is built on the MapReduce framework, which enables effective handling of big data. Analysis using the rainfall prediction database reveals that the proposed model achieved a minimal mean square error (MSE) of 0.001 and a percentage root mean square difference (PRD) of 0.0021.
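The abstract above pairs a swarm optimizer with gradient descent. As a rough illustration of that hybrid idea (not the authors' implementation), the following NumPy sketch alternates a simplified salp-swarm exploration phase with SGD refinement of the best candidate on a toy regression loss; the data, population size, and learning rate are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task standing in for the rainfall series (hypothetical data).
X = rng.normal(size=(200, 3))
true_w = np.array([0.5, -1.2, 2.0])
y = X @ true_w + 0.1 * rng.normal(size=200)

def loss(w):
    return np.mean((X @ w - y) ** 2)

def grad(w):
    return 2 * X.T @ (X @ w - y) / len(y)

def salp_step(pop, food, c1):
    """Simplified salp chain: the leader explores around the best-so-far
    weights (the 'food source'); each follower moves to the midpoint of
    itself and the salp ahead of it."""
    new = pop.copy()
    c2 = rng.random(pop.shape[1])
    c3 = rng.random(pop.shape[1])
    step = c1 * (2 * c2 - 1)
    new[0] = np.where(c3 < 0.5, food + step, food - step)
    for i in range(1, len(pop)):
        new[i] = (new[i - 1] + pop[i]) / 2
    return new

# Hybrid S-SGD sketch: swarm exploration, then SGD refinement of the leader.
pop = rng.normal(size=(10, 3))
best = min(pop, key=loss).copy()
for t in range(1, 51):
    c1 = 2 * np.exp(-(4 * t / 50) ** 2)   # exploration radius decays over time
    pop = salp_step(pop, best, c1)
    cand = min(pop, key=loss)
    if loss(cand) < loss(best):
        best = cand.copy()
    for _ in range(5):                     # SGD refinement of the best weights
        best -= 0.05 * grad(best)
```

After the loop, `loss(best)` approaches the noise floor of the toy data, which is the point of combining global exploration with local gradient refinement.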


2021 ◽  
Vol 13 (21) ◽  
pp. 11631
Author(s):  
Der-Jang Chi ◽  
Chien-Chou Chu

“Going concern” is a professional term in the domain of accounting and auditing. The issuance of appropriate audit opinions by certified public accountants (CPAs) and auditors is critical to companies as going concerns, as misjudgment and/or failure to identify the probability of bankruptcy can cause heavy losses to stakeholders and affect corporate sustainability. In the era of artificial intelligence (AI), deep learning algorithms are widely used by practitioners, and academic research is also gradually embarking on projects in various domains. However, the use of deep learning algorithms in going-concern prediction remains limited. In contrast to prior studies, this study uses long short-term memory (LSTM) and gated recurrent unit (GRU) networks for learning and training in order to construct effective and highly accurate going-concern prediction models. The sample pool consists of Taiwan Stock Exchange Corporation (TWSE) and Taipei Exchange (TPEx) listed companies in 2004–2019, including 86 companies with going-concern doubt and 172 companies without, for a total of 258 sampled companies. There are 20 research variables, comprising 16 financial variables and 4 non-financial variables. The results, based on performance indicators such as accuracy, precision, recall/sensitivity, specificity, F1-score, and Type I and Type II error rates, show that both the LSTM and GRU models perform well. As far as accuracy is concerned, the LSTM model reports 96.15% accuracy while the GRU model shows 94.23%.
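All of the evaluation indicators listed above derive from a binary confusion matrix. As a quick illustration with made-up counts (not the paper's results), taking "going-concern doubt" as the positive class:

```python
# Hypothetical confusion-matrix counts for a going-concern classifier.
tp, fn = 45, 5    # doubt firms correctly flagged / missed
tn, fp = 55, 5    # healthy firms correctly cleared / wrongly flagged

accuracy    = (tp + tn) / (tp + tn + fp + fn)
precision   = tp / (tp + fp)
recall      = tp / (tp + fn)                       # a.k.a. sensitivity
specificity = tn / (tn + fp)
f1          = 2 * precision * recall / (precision + recall)
type_i      = fp / (fp + tn)   # healthy firm wrongly flagged = 1 - specificity
type_ii     = fn / (fn + tp)   # doubt firm missed            = 1 - sensitivity
```

Note that the Type I and Type II error rates are just the complements of specificity and sensitivity, so the six indicators are not independent.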


Nowadays, artificial intelligence applications pervade all fields, including medical applications. Deep learning, a subfield of artificial intelligence, and in particular convolutional neural networks (CNNs), has quickly become the first choice for processing and analyzing medical images due to its performance and effectiveness. Diabetic retinopathy is a vision-loss disease that affects people with diabetes. The disease damages the blood vessels in the retina and hence leads to blindness. Due to the sensitivity and complications involved in managing diabetes, designing and developing automated systems to detect and grade diabetic retinopathy is one of the recent research areas in the world of medical image applications. In this paper, the aspects of the deep learning field related to diabetic retinopathy are discussed. Various concepts in deep learning are elucidated, including the traditional artificial neural network (ANN) algorithm, the drawbacks of ANNs in the context of computer vision and image processing applications, and the best algorithm to overcome those drawbacks, the CNN, along with its architecture. The paper also provides an extensive summary of work in the current research trend and future applications of DL algorithms in medical image analysis for DR detection and grading. Furthermore, various research gaps related to building such automated systems for medical image analysis are discussed, such as imbalanced datasets, which are one of the main performance issues that should be handled, and the need for high-performance computational resources to train deep and efficient models, among others. This is quite beneficial for researchers working in the domain of medical image analysis to handle DR.
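Since the paragraph contrasts plain ANNs with CNNs, a minimal NumPy sketch of the CNN building blocks (a shared convolution kernel, ReLU, max pooling) may help make the weight-sharing and local-receptive-field advantage concrete; the "image" and kernel below are random stand-ins, not a DR model:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation: one shared kernel slides over the image,
    so the parameter count is independent of image size, unlike a dense ANN
    layer whose weight count grows with the number of pixels."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; trims edges that don't divide evenly."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A 64x64 stand-in for a fundus image and a vertical-edge kernel.
rng = np.random.default_rng(1)
img = rng.random((64, 64))
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])
feat = max_pool(relu(conv2d(img, kernel)))   # 62x62 feature map pooled to 31x31
```

The 9-parameter kernel here does the work that would require a separate weight per pixel in a fully connected ANN layer, which is the drawback the paragraph refers to.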


10.2196/19918 ◽  
2020 ◽  
Vol 22 (8) ◽  
pp. e19918
Author(s):  
Joon Lee

In contrast with medical imaging diagnostics powered by artificial intelligence (AI), in which deep learning has led to breakthroughs in recent years, patient outcome prediction poses an inherently challenging problem because it focuses on events that have not yet occurred. Interestingly, the performance of machine learning–based patient outcome prediction models has rarely been compared with that of human clinicians in the literature. Human intuition and insight may be sources of underused predictive information that AI will not be able to identify in electronic data. Both human and AI predictions should be investigated together with the aim of achieving a human-AI symbiosis that synergistically and complementarily combines AI with the predictive abilities of clinicians.


2020 ◽  
Vol 2 ◽  
pp. 58-61 ◽  
Author(s):  
Syed Junaid ◽  
Asad Saeed ◽  
Zeili Yang ◽  
Thomas Micic ◽  
Rajesh Botchu

The advances in deep learning algorithms, exponential growth in computing power, and unprecedented availability of digital patient data have led to a wave of interest and investment in artificial intelligence in health care. No radiology conference is complete without a substantial portion dedicated to AI. Many radiology departments are keen to get involved but are unsure of where and how to begin. This short article provides a simple road map to help departments get involved with the technology, demystify key concepts, and pique interest in the field. We have broken down the journey into seven steps: problem, team, data, kit, neural network, validation, and governance.


2018 ◽  
Vol 15 (1) ◽  
pp. 6-28 ◽  
Author(s):  
Javier Pérez-Sianes ◽  
Horacio Pérez-Sánchez ◽  
Fernando Díaz

Background: Automated compound testing is currently the de facto standard method for drug screening, but it has not brought the great increase in the number of new drugs that was expected. Computer-aided compound search, known as virtual screening, has shown benefits to this field as a complement or even an alternative to robotic drug discovery. There are different methods and approaches to address this problem, and most of them fall under one of the main screening strategies. Machine learning, however, has established itself as a virtual screening methodology in its own right, and it may grow in popularity with the new trends in artificial intelligence. Objective: This paper attempts to provide a comprehensive and structured review that collects the most important proposals made so far in this area of research. Particular attention is given to some recent developments in the machine learning field: the deep learning approach, which is pointed out as a future key player in the virtual screening landscape.
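As a toy example of the ligand-based side of virtual screening discussed above (not taken from the paper), one can rank a compound library by Tanimoto similarity of binary fingerprints to a known active; the fingerprints below are random stand-ins for real descriptors:

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto similarity between two binary fingerprints:
    |intersection| / |union| of the set bits."""
    inter = np.sum(a & b)
    union = np.sum(a | b)
    return inter / union if union else 0.0

rng = np.random.default_rng(2)
library = rng.integers(0, 2, size=(1000, 128))   # hypothetical compound fingerprints
active  = rng.integers(0, 2, size=128)           # fingerprint of a known active

# Score every library compound and keep the five most similar candidates.
scores = np.array([tanimoto(active, fp) for fp in library])
top5 = np.argsort(scores)[::-1][:5]
```

Similarity ranking of this kind is one of the classical screening strategies the review mentions; the machine learning approaches it surveys replace the fixed similarity function with a learned model.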

