A Survey on Intelligence Tools for Data Analytics

Author(s):  
Shatakshi Singh ◽  
Kanika Gautam ◽  
Prachi Singhal ◽  
Sunil Kumar Jangir ◽  
Manish Kumar

The development of artificial intelligence (AI) in this decade has been quite astounding, and machine learning (ML) is one of its core subareas. The ML field is growing incessantly, and its demand and importance continue to rise. It has transformed the way data is extracted, analyzed, and interpreted. Computers are placed in a self-training mode so that, when new data is fed to them, they can learn, grow, change, and develop without explicit programming. This helps to make useful predictions that can guide better decisions in real-life situations without human interference. Selecting an ML tool is always a challenging task, since an appropriate tool can save time and make it faster and easier to deliver a solution. This chapter classifies various machine learning tools along the following aspects: tools for non-programmers, for model deployment, for computer vision, for natural language processing and audio, for reinforcement learning, and for data mining.
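The self-training idea described above can be illustrated with a toy learner: a nearest-centroid classifier whose decision rule is derived entirely from the examples it is shown rather than hand-coded. This is a minimal sketch of the principle, not one of the tools surveyed in the chapter.

```python
# Minimal illustration of "learning from data without explicit programming":
# a nearest-centroid classifier infers its decision rule from examples alone.

def train(samples):
    """samples: list of (feature_vector, label). Returns per-label centroids."""
    sums, counts = {}, {}
    for x, y in samples:
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def predict(centroids, x):
    """Assign x to the label whose centroid is closest (squared distance)."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda y: dist(centroids[y]))

data = [([1.0, 1.0], "low"), ([1.2, 0.8], "low"),
        ([8.0, 9.0], "high"), ([9.0, 8.5], "high")]
model = train(data)
print(predict(model, [1.1, 0.9]))   # → low
print(predict(model, [8.5, 9.0]))   # → high
```

Feeding the same code more labeled samples refines the centroids, which is the sense in which the program "learns, grows, and changes" without its rules being rewritten.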

2020 ◽  
pp. 1-38
Author(s):  
Amandeep Kaur ◽  
Anjum Mohammad Aslam

In this chapter we discuss the core concepts of Artificial Intelligence. We define the term Artificial Intelligence and interconnected terms such as machine learning, deep learning, and neural networks, and we describe these concepts from the perspective of their use in business. We further analyze various applications and case studies that can be realized with Artificial Intelligence and its subfields. Numerous Artificial Intelligence applications are already being utilized in business, and even more are expected in the future, where machines will augment human abilities in Artificial Intelligence, natural language processing, and machine learning across many areas.


2020 ◽  
Vol 114 ◽  
pp. 242-245
Author(s):  
Jootaek Lee

The term Artificial Intelligence (AI) has changed since it was first coined by John McCarthy in 1956. AI, whose roots are sometimes traced to Kurt Gödel's unprovable computational statements of 1931, is now often called deep learning or machine learning. AI is defined as a computer machine with the ability to make predictions about the future and solve complex tasks using algorithms. AI algorithms are enhanced and become effective with big data capturing the present and the past, while still necessarily reflecting human biases in models and equations. AI is also capable of making choices like humans, mirroring human reasoning. AI can help robots to efficiently repeat the same labor-intensive procedures in factories, and it can analyze historic and present data efficiently through deep learning, natural language processing, and anomaly detection. Thus, AI covers a spectrum of augmented intelligence relating to prediction, autonomous intelligence relating to decision making, automated intelligence for labor robots, and assisted intelligence for data analysis.


2021 ◽  
pp. 002073142110174
Author(s):  
Md Mijanur Rahman ◽  
Fatema Khatun ◽  
Ashik Uzzaman ◽  
Sadia Islam Sami ◽  
Md Al-Amin Bhuiyan ◽  
...  

The novel coronavirus disease (COVID-19) has spread across 219 countries of the globe as a pandemic, creating alarming impacts on health care, socioeconomic environments, and international relationships. The principal objective of the study is to present the current technological aspects of artificial intelligence (AI) and other relevant technologies and their implications for confronting COVID-19 and preventing the pandemic's dreadful effects. This article presents AI approaches that have made significant contributions in the field of health care, then highlights and categorizes their applications in confronting COVID-19, such as detection and diagnosis, data analysis and treatment procedures, research and drug development, social control and services, and the prediction of outbreaks. The study addresses the link between the technologies and the epidemics, as well as the potential impacts of technology on health care, with the introduction of machine learning and natural language processing tools. It is expected that this comprehensive study will support researchers in modeling health care systems and drive further studies in advanced technologies. Finally, we propose future directions in research and conclude that persuasive AI strategies, probabilistic models, and supervised learning are required to tackle future pandemic challenges.


2021 ◽  
Vol 2083 (4) ◽  
pp. 042086
Author(s):  
Yuqi Qin

Abstract Machine learning algorithms are the core of artificial intelligence and the fundamental way to make computers intelligent; they are applied in all fields of artificial intelligence. Aiming at the problems of existing algorithms in the discrete manufacturing industry, this paper proposes a new 0-1 coding method to optimize the learning algorithm, and finally proposes a learning algorithm of the "IG type, learning only from the best".
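The abstract does not detail the algorithm itself, but the general shape of a 0-1-encoded population in which every candidate "learns only from the best" individual can be sketched as follows. Everything here (the toy objective, the learning and mutation rates) is a hypothetical reconstruction of the idea, not the paper's method.

```python
import random

# Hypothetical sketch: candidate solutions are 0-1 encoded bit strings, and
# each generation every individual copies bits from the current best with
# some probability ("learning only from the best"), plus a small mutation.

random.seed(0)

TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # stand-in objective: match this string
fitness = lambda bits: sum(b == t for b, t in zip(bits, TARGET))

def evolve(pop_size=20, length=10, generations=50, learn_rate=0.5):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        best = max(pop, key=fitness)
        # every individual inherits each bit from the best with prob. learn_rate
        pop = [[best[i] if random.random() < learn_rate else bit
                for i, bit in enumerate(ind)] for ind in pop]
        # occasional one-bit mutation keeps diversity
        for ind in pop:
            if random.random() < 0.1:
                ind[random.randrange(length)] ^= 1
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

In a real discrete-manufacturing setting the bit string would encode scheduling or configuration decisions and the fitness function would score the resulting plan.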


2015 ◽  
Vol 3 (2) ◽  
pp. 115-126 ◽  
Author(s):  
Naresh Babu Bynagari

Artificial Intelligence (AI) is one of the most promising and intriguing innovations of modernity. Its potential is virtually unlimited, from smart music selection in personal gadgets to intelligent analysis of big data and real-time fraud detection and prevention. At the core of the AI philosophy lies an assumption that once a computer system is provided with enough data, it can learn from that input. The more data is provided, the more sophisticated its learning ability becomes. This feature has acquired the name "machine learning" (ML). The opportunities explored with ML are plentiful today, and one of them is the ability to set up an evolving security system that learns from past cyber-fraud experiences and develops more rigorous fraud detection mechanisms. Read on to learn more about ML, the types and magnitude of fraud evidenced in modern banking, e-commerce, and healthcare, and how ML has become an innovative, timely, and efficient fraud prevention technology.
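The "evolving security system" idea can be illustrated with a minimal statistical anomaly detector that learns normal transaction behaviour from past data and flags outliers. This is a toy sketch of the principle, not the article's method; real fraud systems use far richer features and models.

```python
import statistics

# Toy sketch of learned fraud detection: fit "normal" behaviour from past
# transaction amounts, then flag anything too many standard deviations away.

def fit(amounts):
    """Learn a model of normal behaviour (here: just mean and std. dev.)."""
    return statistics.mean(amounts), statistics.stdev(amounts)

def is_suspicious(model, amount, threshold=3.0):
    """Flag the transaction if its z-score exceeds the threshold."""
    mean, std = model
    return abs(amount - mean) / std > threshold

history = [24.0, 31.5, 18.2, 27.9, 22.4, 30.1, 25.8, 21.7]
model = fit(history)
print(is_suspicious(model, 26.0))    # → False
print(is_suspicious(model, 480.0))   # → True
```

Refitting the model as new legitimate transactions arrive is the simplest form of the "evolving" behaviour the article describes: the definition of normal is learned from data, not hand-written.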


Author(s):  
E. Grilli ◽  
E. M. Farella ◽  
A. Torresani ◽  
F. Remondino

<p><strong>Abstract.</strong> In recent years, the application of artificial intelligence (Machine Learning and Deep Learning methods) to the classification of 3D point clouds has become an important task in modern 3D documentation and modelling applications. The identification of proper geometric and radiometric features is fundamental to classifying 2D/3D data correctly. While many studies have been conducted in the geospatial field, the cultural heritage sector is still partly unexplored. In this paper we analyse the efficacy of geometric covariance features as a support for the classification of Cultural Heritage point clouds. To analyse the impact of the different features calculated on spherical neighbourhoods at various radius sizes, we present results obtained on four different heritage case studies using different feature configurations.</p>
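Covariance features of this kind are conventionally derived from the eigenvalues of a neighbourhood's 3×3 covariance matrix. The sketch below computes three standard ones (linearity, planarity, sphericity) on a spherical neighbourhood; the radius and feature definitions follow common usage and are not necessarily the authors' exact configuration.

```python
import numpy as np

# Eigenvalue-based covariance features on a spherical neighbourhood of a
# 3D point cloud: sort the eigenvalues λ1 ≥ λ2 ≥ λ3 of the neighbourhood's
# covariance matrix and combine them into shape descriptors.

def covariance_features(points, center, radius):
    pts = points[np.linalg.norm(points - center, axis=1) <= radius]
    if len(pts) < 3:
        return None                            # neighbourhood too sparse
    cov = np.cov(pts.T)                        # 3x3 covariance matrix
    l1, l2, l3 = sorted(np.linalg.eigvalsh(cov), reverse=True)
    return {
        "linearity":  (l1 - l2) / l1,          # 1D structures (edges, mouldings)
        "planarity":  (l2 - l3) / l1,          # 2D structures (walls, floors)
        "sphericity": l3 / l1,                 # 3D scatter (vegetation, noise)
    }

# Points scattered on a near-flat plane: planarity should dominate.
rng = np.random.default_rng(0)
plane = np.column_stack([rng.uniform(-1, 1, 200),
                         rng.uniform(-1, 1, 200),
                         rng.normal(0, 0.01, 200)])
feats = covariance_features(plane, np.zeros(3), radius=2.0)
print(max(feats, key=feats.get))   # → planarity
```

Varying the radius, as the paper does, changes which structures dominate the neighbourhood and hence which feature responds, which is why multi-radius configurations are compared.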


Author(s):  
Peter R Slowinski

The core of artificial intelligence (AI) applications is software of one sort or another. But while available data and computing power are important for the recent quantum leap in AI, there would not be any AI without computer programs or software. Therefore, the rise in importance of AI forces us to take—once again—a closer look at software protection through intellectual property (IP) rights, but it also offers us a chance to rethink this protection, and while perhaps not undoing the mistakes of the past, at least to adapt the protection so as not to increase the dysfunctionality that we have come to see in this area of law in recent decades. To be able to establish the best possible way to protect—or not to protect—the software in AI applications, this chapter starts with a short technical description of what AI is, with readers referred to other chapters in this book for a deeper analysis. It continues by identifying those parts of AI applications that constitute software to which legal software protection regimes may be applicable, before outlining those protection regimes, namely copyright and patents. The core part of the chapter analyses potential issues regarding software protection with respect to AI using specific examples from the fields of evolutionary algorithms and of machine learning. Finally, the chapter draws some conclusions regarding the future development of IP regimes with respect to AI.


Author(s):  
Irene Li ◽  
Alexander R. Fabbri ◽  
Robert R. Tung ◽  
Dragomir R. Radev

Recent years have witnessed the rising popularity of Natural Language Processing (NLP) and related fields such as Artificial Intelligence (AI) and Machine Learning (ML). Many online courses and resources are available even for those without a strong background in the field. Often the student is curious about a specific topic but does not quite know where to begin studying. To answer the question of "what should one learn first," we apply an embedding-based method to learn prerequisite relations for course concepts in the domain of NLP. We introduce LectureBank, a publicly available dataset containing 1,352 English lecture files collected from university courses, each classified according to an existing taxonomy, as well as 208 manually labeled prerequisite relation topics. The dataset will be useful for educational purposes such as lecture preparation and organization, as well as for applications such as reading list generation. Additionally, we experiment with neural graph-based networks and non-neural classifiers to learn these prerequisite relations from our dataset.
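The embedding-based idea can be illustrated by scoring candidate prerequisite concepts with vector similarity: concepts with similar representations are likelier to be related. The vectors and the scoring rule below are toy stand-ins, not LectureBank's learned embeddings or the paper's actual classifier.

```python
import math

# Toy sketch of embedding-based prerequisite scoring: represent each course
# concept as a vector and rank candidate prerequisites by cosine similarity.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hand-made 3-dimensional stand-ins for learned concept embeddings.
embeddings = {
    "probability":     [0.9, 0.1, 0.0],
    "language models": [0.7, 0.5, 0.2],
    "poetry":          [0.0, 0.1, 0.9],
}

# Rank candidate prerequisites for "language models" by similarity.
target = "language models"
scores = {c: cosine(embeddings[c], embeddings[target])
          for c in embeddings if c != target}
print(max(scores, key=scores.get))   # → probability
```

Similarity alone is symmetric, so a real system (like the paper's classifiers) must additionally learn the *direction* of the relation, i.e. that probability precedes language models and not the reverse.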

