scholarly journals Application of deep learning based artificial intelligence technology in identification of colorectal polyps

2021 ◽  
Vol 29 (20) ◽  
pp. 1201-1206
Author(s):  
Xing-Wang Zhu ◽  
Jun Yan ◽  
Ying-Li He ◽  
Gang Liu ◽  
Xun Li
2022 ◽  
Vol 30 (7) ◽  
pp. 1-23
Author(s):  
Hongwei Hou ◽  
Kunzhi Tang ◽  
Xiaoqian Liu ◽  
Yue Zhou

The aim of this article is to promote the development of rural finance and to further the informatization of rural banks. Based on deep learning (DL) and artificial intelligence technology, data pre-processing and feature selection are conducted on the customer information of rural banks in a certain region, including historical deposits and loans, transaction records, and credit information. Four DL models are proposed, each achieving a test precision above 87%, to improve the simulation effect and explore the application of DL. The BLSTM-CNN (Bi-directional Long Short-Term Memory-Convolutional Neural Network) model, which integrates an RNN (Recurrent Neural Network) and a CNN (Convolutional Neural Network) in parallel, reaches a precision of 95.8% and compensates for the individual shortcomings of RNNs and CNNs. The results can provide rural banks with a more reasonable prediction model, as well as ideas for developing rural informatization and promoting rural governance.
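To illustrate the parallel-fusion idea behind the BLSTM-CNN model, the following is a minimal stdlib sketch, not the paper's network: a "convolutional" branch extracts local windowed features, a "recurrent" branch accumulates sequential state in both directions, and the two feature vectors are concatenated for a downstream classifier. All weights, window sizes, and decay rates here are illustrative assumptions.

```python
# Toy sketch of parallel feature fusion in the spirit of a BLSTM-CNN model:
# one branch captures local patterns, the other captures sequential context,
# and their outputs are concatenated (fused) into a single feature vector.

def conv_branch(seq, window=3):
    """Local features: mean over each sliding window (a stand-in for 1-D convolution)."""
    return [sum(seq[i:i + window]) / window for i in range(len(seq) - window + 1)]

def recurrent_branch(seq, decay=0.5):
    """Sequential features: exponentially decayed running state, forward and backward."""
    def scan(xs):
        state, out = 0.0, []
        for x in xs:
            state = decay * state + (1 - decay) * x
            out.append(state)
        return out
    return scan(seq) + scan(seq[::-1])  # "bi-directional": forward + backward passes

def fused_features(seq):
    """Parallel fusion: concatenate both branches' outputs into one feature vector."""
    return conv_branch(seq) + recurrent_branch(seq)

features = fused_features([1.0, 2.0, 3.0, 4.0, 5.0])
```

In a real BLSTM-CNN the branches are trained jointly and fused before a dense classification layer; the sketch only shows why running them in parallel yields complementary local and sequential features.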


CONVERTER ◽  
2021 ◽  
pp. 651-658
Author(s):  
Jiang Yan, Wang Peipei

Artificial intelligence and deep learning are important technologies widely used in the manufacturing industry. Comprehensively evaluating teachers' performance with the help of a performance appraisal system is a sound measure, so it is necessary to develop such a system for university teachers using artificial intelligence technology. This paper first demonstrates the feasibility of developing a performance appraisal system and scientifically divides the user roles. According to the business requirements, the core business processes of the system are established, and the system architecture and functional modules are designed. The paper also establishes the conceptual and logical models of the database. Finally, the SSH framework and the ExtJS framework are used to implement the system's functions. The reliability, stability, and security of the system are tested to ensure that it meets both functional and non-functional requirements. Operation results show that the system is stable, simple to operate, and easy to maintain, and basically meets the needs of users at different levels.


2021 ◽  
Author(s):  
Andrew R. Johnston

DeepMind, a recent artificial intelligence technology created at Google, references in its name the relationship in AI between models of cognition used in this technology's development and its new deep learning algorithms. This chapter shows how AI researchers have been attempting to reproduce applied learning strategies in humans but have difficulty accessing and visualizing the computational actions of their algorithms. Google created an interface for engaging with computational temporalities through the production of visual animations based on DeepMind machine-learning test runs of Atari 2600 video games. These machine play animations bear the traces not only of DeepMind's operations, but also of contemporary shifts in how computational time is accessed and understood.


2022 ◽  
Vol 2146 (1) ◽  
pp. 012024
Author(s):  
Wei Qi ◽  
Chun Ying ◽  
Sheng Yong ◽  
Guizhi Zhao ◽  
Lihua Wang

Abstract With the development and popularization of computer artificial intelligence technology, more and more intelligent machines are being produced, bringing great convenience to people's lives. This paper studies an environment-adaptive control method for a snake robot, mainly explaining the construction and stability of a multi-modal CPG (central pattern generator) model. In addition, the paper studies trajectory tracking and dynamic obstacle avoidance for a mobile robot based on deep learning.
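As a rough intuition for how a CPG drives serpentine locomotion, the following sketch simulates a chain of coupled phase oscillators, one per joint; it is a generic textbook-style CPG, not the paper's multi-modal model, and every parameter value is an illustrative assumption.

```python
import math

# Minimal CPG sketch: each oscillator advances at frequency OMEGA and is pulled
# toward a fixed phase lag behind its neighbour, producing a travelling wave of
# joint angles along the snake robot's body (serpentine gait).

OMEGA = 2.0 * math.pi      # intrinsic frequency (rad/s)
LAG = math.pi / 4          # desired phase lag between adjacent joints
K = 4.0                    # coupling strength
AMP = 0.6                  # joint-angle amplitude (rad)

def simulate(n_joints=8, dt=0.01, steps=500):
    phases = [i * LAG for i in range(n_joints)]  # start near the desired wave
    history = []
    for _ in range(steps):
        new = []
        for i, phi in enumerate(phases):
            dphi = OMEGA
            if i > 0:  # couple each joint to the one ahead of it
                dphi += K * math.sin(phases[i - 1] - phi + LAG)
            new.append(phi + dphi * dt)
        phases = new
        history.append([AMP * math.sin(p) for p in phases])  # commanded joint angles
    return history

angles = simulate()
```

Changing OMEGA, LAG, or AMP changes the gait's speed, wavelength, and amplitude, which is the sense in which a multi-modal CPG can switch locomotion modes by switching parameter sets.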


2021 ◽  
Vol 17 ◽  
Author(s):  
Prashanth Kulkarni ◽  
Manjappa Mahadevappa ◽  
Srikar Chilakamarri

Artificial intelligence technology is emerging as a promising entity in cardiovascular medicine, potentially improving diagnosis and patient care. In this article, we review the literature on artificial intelligence and its utility in cardiology. We provide a detailed description of concepts of artificial intelligence tools like machine learning, deep learning, and cognitive computing. This review discusses the current evidence, application, prospects, and limitations of artificial intelligence in cardiology.


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Guangquan Xu ◽  
Guofeng Feng ◽  
Litao Jiao ◽  
Meiqi Feng ◽  
Xi Zheng ◽  
...  

With the extensive application of artificial intelligence technology in 5G and Beyond Fifth Generation (B5G) networks, it has become a common trend for artificial intelligence to integrate into modern communication networks. Deep learning is a subset of machine learning and has recently led to significant improvements in many fields; in particular, many 5G-based services use deep learning technology to provide better service. Although deep learning is powerful, 5G-based deep learning services remain vulnerable: because deep learning algorithms are highly nonlinear, a slight perturbation of the input by an attacker can cause large changes in the output. Many researchers have proposed defenses against adversarial attacks, but these methods are not always effective against powerful attacks such as CW. In this paper, we propose a new two-stream network, comprising an RGB stream and a spatial rich model (SRM) noise stream, to discover the differences between adversarial and clean examples. The RGB stream uses raw data to capture subtle differences in adversarial examples; the SRM noise stream uses SRM filters to extract noise features, which we regard as additional evidence for adversarial detection. We then adopt bilinear pooling to fuse the RGB features and the SRM features, and the fused features are fed into a decision network that decides whether the image is adversarial. Experimental results show that the proposed method detects adversarial examples accurately: even under powerful attacks, it achieves a detection rate of 91.3%. Moreover, the method transfers well, generalizing to other adversaries.
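The noise-stream and fusion steps can be sketched in a few lines. The kernel below is one classic high-pass residual filter of the kind used in spatial rich models; the paper's SRM stream uses a full filter bank, and the toy grayscale image, feature slicing, and pooling details here are illustrative assumptions only.

```python
# Sketch of the two-stream idea: the RGB stream uses raw pixels, while the
# noise stream convolves the image with an SRM-style high-pass residual kernel
# to expose perturbation noise. Bilinear pooling then fuses the two feature
# vectors via their flattened outer product.

SRM_KERNEL = [[-1, 2, -1],
              [2, -4, 2],
              [-1, 2, -1]]  # one classic 3x3 high-pass residual filter (scaled by 1/4)

def srm_filter(img):
    """Valid-mode 2-D convolution of a grayscale image with the SRM kernel."""
    h, w = len(img), len(img[0])
    out = []
    for r in range(h - 2):
        row = []
        for c in range(w - 2):
            acc = sum(SRM_KERNEL[i][j] * img[r + i][c + j]
                      for i in range(3) for j in range(3))
            row.append(acc / 4.0)
        out.append(row)
    return out

def bilinear_pool(feat_a, feat_b):
    """Fuse two feature vectors into one via the flattened outer product."""
    return [a * b for a in feat_a for b in feat_b]

img = [[10, 10, 10, 10]] * 4                              # toy constant image
rgb_feat = [v for row in img for v in row][:4]            # toy RGB-stream features
noise_feat = [v for row in srm_filter(img) for v in row]  # noise-stream features
fused = bilinear_pool(rgb_feat, noise_feat)
```

Because the kernel's coefficients sum to zero, smooth image regions produce near-zero residuals, while the high-frequency perturbations typical of adversarial examples survive the filter and show up in the fused features.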


Author(s):  
Yongmin Yoo ◽  
Dongjin Lim ◽  
Kyungsun Kim

Thanks to the rapid development of artificial intelligence technology in recent years, AI now contributes to many parts of society, with a very large impact on education, the environment, medical care, the military, tourism, the economy, politics, and more. For example, in education there are artificial intelligence tutoring systems that automatically assign tutors based on a student's level; in economics there are quantitative investment methods that automatically analyze large amounts of data to discover investment rules, build investment models, or predict changes in financial markets. Since artificial intelligence technology is used in so many fields, it is important to know exactly which factors have an important influence on each field and how the fields are related, so artificial intelligence technology in each field needs to be analyzed. In this paper, we analyze patent documents related to artificial intelligence technology and propose a method for keyword analysis within factors using artificial intelligence patent data sets. The model relies on feature engineering based on the deep learning model KeyBERT, together with a vector space model. A case study collecting and analyzing artificial intelligence patent data shows how the proposed model can be applied to real-world problems.
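To show the vector-space side of such a pipeline, here is a minimal TF-IDF keyword extractor over a tiny invented patent-title corpus. The paper's method uses KeyBERT embeddings on real patent data; this stdlib sketch only illustrates scoring terms in a vector space model.

```python
import math
from collections import Counter

# Minimal vector-space keyword sketch: score each term of one document by
# TF-IDF against a small corpus and keep the top-scoring terms. The three
# documents below are invented examples, not real patent data.

corpus = [
    "neural network model for image recognition",
    "deep neural network training method apparatus",
    "battery charging circuit for electric vehicle",
]

def top_keywords(doc_index, corpus, k=3):
    docs = [d.split() for d in corpus]
    n = len(docs)
    df = Counter(term for d in docs for term in set(d))   # document frequency
    tf = Counter(docs[doc_index])                         # term frequency
    scores = {t: tf[t] * math.log(n / df[t]) for t in tf}
    ranked = sorted(scores.items(), key=lambda kv: (-kv[1], kv[0]))
    return [t for t, _ in ranked[:k]]

keywords = top_keywords(1, corpus)
```

Terms shared across documents ("neural", "network") get discounted by the IDF factor, so the terms distinctive to one patent rise to the top, which is the intuition behind factor-level keyword analysis.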


2017 ◽  
Vol 85 (5) ◽  
pp. AB364-AB365 ◽  
Author(s):  
Michael F. Byrne ◽  
Nicolas Chapados ◽  
Florian Soudan ◽  
Clemens Oertel ◽  
Milagros L. Linares Pérez ◽  
...  

2021 ◽  
Vol 2078 (1) ◽  
pp. 012047
Author(s):  
Xiao Hu ◽  
Hao Wen

Abstract So far, artificial intelligence has gone through decades of development. Although the technology is not yet mature, it is already applied in many walks of life. With the explosion of IoT technology in 2019, artificial intelligence ushered in a new climax; it can be said that the development of IoT has once again driven the development of artificial intelligence. However, traditional deep learning models are complex and redundant, and IoT hardware cannot afford the time and resource costs of models originally run on GPUs, so model compression that does not significantly decrease accuracy is applicable in this situation. In this paper, we experiment with two model-compression tricks: pruning and quantization. Using these methods, we obtain a remarkable simplification of the model while retaining a nearly unchanged accuracy.
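The two tricks can be sketched on a flat list of weights: magnitude pruning zeroes the smallest weights, and uniform affine quantization maps the survivors to 8-bit integers. The sparsity level, bit-width, and example weights below are illustrative choices, not the paper's settings.

```python
# Sketch of the two compression tricks: magnitude pruning and quantization.

def prune(weights, sparsity=0.5):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    k = int(len(weights) * sparsity)
    cutoff = sorted(abs(w) for w in weights)[k - 1] if k else float("-inf")
    return [0.0 if abs(w) <= cutoff else w for w in weights]

def quantize(weights, bits=8):
    """Uniform affine quantization to signed integers, plus the scale to dequantize."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax or 1.0
    return [round(w / scale) for w in weights], scale

w = [0.9, -0.05, 0.4, 0.02, -0.7, 0.1]
pruned = prune(w, sparsity=0.5)          # half the weights become exact zeros
q, scale = quantize(pruned)              # 8-bit integer codes + dequantization scale
recovered = [qi * scale for qi in q]     # approximate reconstruction
```

Pruned zeros can be stored sparsely and the 8-bit codes replace 32-bit floats, which is where the memory and compute savings on IoT hardware come from; the reconstruction error stays within one quantization step per weight.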

