Machine Learning in Python: Main Developments and Technology Trends in Data Science, Machine Learning, and Artificial Intelligence

Information
2020
Vol 11 (4)
pp. 193
Author(s):  
Sebastian Raschka ◽  
Joshua Patterson ◽  
Corey Nolet

Smarter applications are making better use of the insights gleaned from data, having an impact on every industry and research discipline. At the core of this revolution lie the tools and methods that drive it, from processing the massive volumes of data generated each day to learning from that data and acting on it. Deep neural networks, along with advancements in classical machine learning and scalable general-purpose graphics processing unit (GPU) computing, have become critical components of artificial intelligence, enabling many of these astounding breakthroughs and lowering the barrier to adoption. Python continues to be the preferred language for scientific computing, data science, and machine learning, boosting both performance and productivity by combining low-level libraries with clean high-level APIs. This survey offers insight into the field of machine learning with Python, taking a tour through important topics to identify some of the core hardware and software paradigms that have enabled it. We cover widely used libraries and concepts, collected together for holistic comparison, with the goal of educating the reader and driving the field of Python machine learning forward.
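As a minimal sketch of the pairing the survey describes, low-level array libraries feeding clean high-level APIs, consider the following; the dataset and model choice here are illustrative assumptions, not examples taken from the paper:

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # NumPy supplies the low-level array containers; scikit-learn the high-level API.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))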

2013
Vol 753-755
pp. 2731-2735
Author(s):  
Wei Cao ◽  
Zheng Hua Wang ◽  
Chuan Fu Xu

The graphics processing unit (GPU) has evolved from a configurable graphics processor into a powerful engine for high-performance computing. In this paper, we describe the graphics pipeline of the GPU and review the history and evolution of GPU architecture. We also summarize the software environments used on GPUs, from graphics APIs to non-graphics APIs. Finally, we present GPU computing in computational fluid dynamics applications, including GPGPU computing for Navier-Stokes solvers and for the lattice Boltzmann method.
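As a hedged sketch of the kind of GPGPU computation this paper surveys, the snippet below uses CuPy (a library choice assumed here; the paper names no specific binding) to run Jacobi relaxation steps of a 2-D Laplace problem, a common building block in CFD-style codes:

    import cupy as cp

    # Toy 2-D grid with a fixed boundary; interior initialized to zero.
    u = cp.zeros((512, 512), dtype=cp.float32)
    u[0, :] = 1.0  # heated top boundary

    def jacobi_step(u):
        # Replace each interior point by the average of its four neighbors, on the GPU.
        new = u.copy()
        new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                  u[1:-1, :-2] + u[1:-1, 2:])
        return new

    for _ in range(100):
        u = jacobi_step(u)
    print(float(u[1:-1, 1:-1].mean()))  # copies the scalar result back to the host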


2012
Vol 2012
pp. 1-15
Author(s):  
Ilia Lebedev ◽  
Christopher Fletcher ◽  
Shaoyi Cheng ◽  
James Martin ◽  
Austin Doupnik ◽  
...  

We present a highly productive approach to hardware design based on a many-core microarchitectural template used to implement compute-bound applications expressed in a high-level data-parallel language such as OpenCL. The template is customized on a per-application basis via a range of high-level parameters such as the interconnect topology or processing element architecture. The key benefits of this approach are that it (i) allows programmers to express parallelism through an API defined in a high-level programming language, (ii) supports coarse-grained multithreading and fine-grained threading while permitting bit-level resource control, and (iii) reduces the effort required to repurpose the system for different algorithms or different applications. We compare template-driven design to both full-custom and programmable approaches by studying implementations of a compute-bound data-parallel Bayesian graph inference algorithm across several candidate platforms. Specifically, we examine a range of template-based implementations on both FPGA and ASIC platforms and compare each against full-custom designs. Throughout this study, we use a general-purpose graphics processing unit (GPGPU) implementation as a performance and area baseline. We show that our approach, similar in productivity to programmable approaches such as GPGPU applications, yields implementations with performance approaching that of full-custom designs on both FPGA and ASIC platforms.
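For reference, this is roughly what the data-parallel, OpenCL-style expression of parallelism consumed by such a template looks like. The vector-add kernel below is a generic sketch driven through PyOpenCL (a host-side choice assumed here, not a tool from the paper):

    import numpy as np
    import pyopencl as cl

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)

    a = np.random.rand(1024).astype(np.float32)
    b = np.random.rand(1024).astype(np.float32)
    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    # One work-item per element: the parallelism is expressed in the kernel, not the host.
    prg = cl.Program(ctx, """
    __kernel void add(__global const float *a, __global const float *b,
                      __global float *out) {
        int gid = get_global_id(0);
        out[gid] = a[gid] + b[gid];
    }
    """).build()

    prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)
    out = np.empty_like(a)
    cl.enqueue_copy(queue, out, out_buf)
    print(np.allclose(out, a + b))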


Author(s):  
K. Bhargavi ◽  
Sathish Babu B.

GPUs (graphics processing units) were originally used mainly to speed up compute-intensive high-performance computing applications, and several tools and technologies are now available for running general-purpose, computationally intensive applications on them. This chapter primarily discusses GPU parallelism, applications, and probable challenges, and also highlights some of the GPU computing platforms, including CUDA, OpenCL (Open Computing Language), OpenMPC (OpenMP extended for CUDA), MPI (Message Passing Interface), OpenACC (Open Accelerators), DirectCompute, and C++ AMP (C++ Accelerated Massive Parallelism). Each of these platforms is discussed briefly along with its advantages and disadvantages.
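As a small illustration of the CUDA model the chapter lists, here is a SAXPY kernel written against Numba's CUDA target (the Python binding is an assumption of this sketch; the chapter itself does not prescribe one):

    import numpy as np
    from numba import cuda

    @cuda.jit
    def saxpy(a, x, y, out):
        # One GPU thread per element, indexed by its global thread id.
        i = cuda.grid(1)
        if i < out.size:
            out[i] = a * x[i] + y[i]

    n = 1 << 20
    x = np.ones(n, dtype=np.float32)
    y = np.arange(n, dtype=np.float32)
    out = np.empty_like(x)

    threads = 256
    blocks = (n + threads - 1) // threads  # enough blocks to cover every element
    saxpy[blocks, threads](2.0, x, y, out)  # Numba copies the arrays to and from the device
    print(out[:4])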


2020
Vol 7 (1)
pp. 2-3
Author(s):  
Shadi Saleh

Deep learning and machine learning innovations are at the core of the ongoing revolution in artificial intelligence for the interpretation and analysis of multimedia data. The convergence of large-scale datasets and more affordable Graphics Processing Unit (GPU) hardware has enabled the development of neural networks for data analysis problems that were previously handled by traditional handcrafted features. Several deep learning architectures, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM)/Gated Recurrent Unit (GRU) networks, Deep Belief Networks (DBNs), and Deep Stacking Networks (DSNs), have been used with new open-source software and library options to shape an entirely new scenario in computer vision processing.
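A minimal sketch of one architecture named above, a small CNN for single-channel images, written in PyTorch (the library choice is an assumption; the abstract names none):

    import torch
    import torch.nn as nn

    class TinyCNN(nn.Module):
        """A deliberately small CNN for 28x28 single-channel images."""
        def __init__(self, num_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learn local filters
                nn.ReLU(),
                nn.MaxPool2d(2),                             # downsample 28 -> 14
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),                             # 14 -> 7
            )
            self.classifier = nn.Linear(32 * 7 * 7, num_classes)

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))

    model = TinyCNN()
    logits = model(torch.randn(8, 1, 28, 28))  # dummy batch of 8 images
    print(logits.shape)                        # torch.Size([8, 10])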


2021
Vol 2083 (4)
pp. 042086
Author(s):  
Yuqi Qin

Abstract Machine learning algorithms are the core of artificial intelligence and the fundamental way of making computers intelligent, with applications across every field of artificial intelligence. Addressing the problems that existing algorithms face in the discrete manufacturing industry, this paper proposes a new 0-1 coding method to optimize the learning algorithm and, finally, proposes a learning algorithm of the "IG type, learning only from the best".
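The abstract does not define its 0-1 coding, but in this literature such codings typically map discrete choices onto binary vectors. The following generic sketch (entirely an assumption for illustration, not the paper's method) encodes a job-to-machine assignment that way:

    import numpy as np

    # Hypothetical discrete-manufacturing instance: 4 jobs, 3 machines.
    n_jobs, n_machines = 4, 3

    def encode(assignment):
        """0-1 code: one one-hot block per job marking its assigned machine."""
        bits = np.zeros((n_jobs, n_machines), dtype=np.int8)
        bits[np.arange(n_jobs), assignment] = 1
        return bits.ravel()

    def decode(bits):
        return bits.reshape(n_jobs, n_machines).argmax(axis=1)

    assignment = np.array([0, 2, 1, 2])  # job i runs on machine assignment[i]
    code = encode(assignment)
    print(code)                          # a length n_jobs * n_machines 0-1 vector
    assert (decode(code) == assignment).all()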


2015
Vol 3 (2)
pp. 115-126
Author(s):  
Naresh Babu Bynagari

Artificial Intelligence (AI) is one of the most promising and intriguing innovations of modernity. Its potential is virtually unlimited, from smart music selection in personal gadgets to intelligent analysis of big data and real-time fraud detection and aversion. At the core of the AI philosophy lies an assumption that once a computer system is provided with enough data, it can learn based on that input. The more data is provided, the more sophisticated its learning ability becomes. This feature has acquired the name "machine learning" (ML). The opportunities explored with ML are plentiful today, and one of them is an ability to set up an evolving security system learning from the past cyber-fraud experiences and developing more rigorous fraud detection mechanisms. Read on to learn more about ML, the types and magnitude of fraud evidenced in modern banking, e-commerce, and healthcare, and how ML has become an innovative, timely, and efficient fraud prevention technology.
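As a hedged sketch of the kind of evolving fraud detector described above, the snippet below fits a scikit-learn IsolationForest to past transactions and flags anomalous new ones; the features and data are hypothetical stand-ins:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Hypothetical features per transaction: [amount, hour_of_day, merchant_risk].
    normal = np.column_stack([
        rng.lognormal(3.0, 0.5, 5000),   # typical purchase amounts
        rng.integers(8, 22, 5000),       # daytime activity
        rng.uniform(0.0, 0.3, 5000),     # low-risk merchants
    ])

    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(normal)  # learn what past, legitimate behavior looks like

    new = np.array([[25.0, 14, 0.1],     # ordinary purchase
                    [9500.0, 3, 0.9]])   # large, late-night, risky merchant
    print(detector.predict(new))         # 1 = normal, -1 = flagged as suspicious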


2018
Vol 15 (3)
pp. 497-498
Author(s):  
Ruth C. Carlos ◽  
Charles E. Kahn ◽  
Safwan Halabi

2021
Author(s):  
Neeraj Mohan ◽  
Ruchi Singla ◽  
Priyanka Kaushal ◽  
Seifedine Kadry

2020
pp. 87-94
Author(s):  
Pooja Sharma

Artificial intelligence and machine learning, the two iterations of automation, are based on data, small or large. The larger the data, the more effective an AI or machine learning tool will be; the opposite holds for smaller data. With larger pools of data, large businesses and multinational corporations have effectively been building, developing, and adopting refined AI- and machine-learning-based decision systems. The contention of this chapter is to explore whether small businesses, with only small data in hand, are well placed to use and adopt AI and machine learning tools for their day-to-day business operations.

