Design Exploration of Machine Learning Data-Flows onto Heterogeneous Reconfigurable Hardware

2020
Author(s): Westerley Oliveira, Michael Canesche, Lucas Reis, José Nacif, Ricardo Ferreira

Machine/deep learning applications are currently at the center of attention in both industry and academia, making the acceleration of these applications a highly relevant research topic. Acceleration comes in different flavors, including parallelizing routines on a GPU, FPGA, or CGRA. In this work, we explore the placement and routing of the dataflow graphs of machine learning applications onto three heterogeneous CGRA architectures. We compare our results with the homogeneous case and with one of the state-of-the-art tools for placement and routing (P&R). Our algorithm executed, on average, 52% faster than Versatile Place and Route (VPR) 8.1. Furthermore, a heterogeneous architecture reduces cost without losing performance in 76% of the cases.
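To make the P&R problem concrete, here is a minimal sketch (not the paper's algorithm) of simulated-annealing placement of a toy dataflow graph onto a small heterogeneous CGRA grid, with Manhattan wirelength standing in for routing cost; the grid size, node names, and PE-type rule are illustrative assumptions.

# Minimal sketch, assuming a 4x4 heterogeneous fabric and a toy dataflow graph;
# Manhattan wirelength is used as a routing-cost proxy.
import math
import random

GRID_W, GRID_H = 4, 4
# Hypothetical fabric: 'M' cells support multiply (and add), 'A' cells only add.
PE_TYPE = {(x, y): ('M' if (x + y) % 2 == 0 else 'A')
           for x in range(GRID_W) for y in range(GRID_H)}

# Toy dataflow graph: node -> required operation type, plus directed edges.
NODES = {'in0': 'A', 'in1': 'A', 'mul': 'M', 'add': 'A', 'out': 'A'}
EDGES = [('in0', 'mul'), ('in1', 'mul'), ('mul', 'add'), ('add', 'out')]

def wirelength(placement):
    # Sum of Manhattan distances over all edges: cheaper to route when small.
    return sum(abs(placement[u][0] - placement[v][0]) +
               abs(placement[u][1] - placement[v][1]) for u, v in EDGES)

def legal(placement):
    # Every node must sit on a PE that supports its operation.
    return all(PE_TYPE[pos] in ('M', NODES[n]) for n, pos in placement.items())

def anneal(steps=2000, t0=2.0):
    cells = list(PE_TYPE)
    while True:                                   # random legal initial placement
        placement = dict(zip(NODES, random.sample(cells, len(NODES))))
        if legal(placement):
            break
    cost = wirelength(placement)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-3        # simple linear cooling schedule
        a, b = random.sample(list(NODES), 2)
        cand = dict(placement)
        cand[a], cand[b] = placement[b], placement[a]   # swap two node locations
        if not legal(cand):
            continue
        delta = wirelength(cand) - cost
        if delta <= 0 or random.random() < math.exp(-delta / t):
            placement, cost = cand, cost + delta
    return placement, cost

place, cost = anneal()
print('placement:', place, '| wirelength:', cost)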

2021, Vol 3 (1)
Author(s): Zhikuan Zhao, Jack K. Fitzsimons, Patrick Rebentrost, Vedran Dunjko, Joseph F. Fitzsimons

Machine learning has recently emerged as a fruitful area for finding potential quantum computational advantage. Many of the quantum-enhanced machine learning algorithms critically hinge upon the ability to efficiently produce states proportional to high-dimensional data points stored in a quantum accessible memory. Even given query access to exponentially many entries stored in a database, the construction of which is considered a one-off overhead, it has been argued that the cost of preparing such amplitude-encoded states may offset any exponential quantum advantage. Here we prove, using smoothed analysis, that if the data analysis algorithm is robust against small entry-wise input perturbations, state preparation can always be achieved with a constant number of queries. This criterion is typically satisfied in realistic machine learning applications, where input data is subject to moderate noise. Our results are equally applicable to the recent seminal progress in quantum-inspired algorithms, where specially constructed databases suffice for polylogarithmic classical algorithms in low-rank cases. The consequence of our finding is that, for the purpose of practical machine learning, polylogarithmic processing time is possible under a general and flexible input model, with quantum algorithms or, in the low-rank case, with quantum-inspired classical algorithms.
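For context, a brief sketch of the amplitude-encoding step and an entry-wise robustness criterion of the kind discussed above; the symbols x, f, delta, and sigma are generic notation assumed here, not taken from the paper:

\[
  |x\rangle \;=\; \frac{1}{\lVert x \rVert_2} \sum_{i=1}^{N} x_i \, |i\rangle ,
  \qquad
  \bigl| f(x) - f(x + \epsilon) \bigr| \;\le\; \delta
  \quad \text{whenever} \quad \max_i \lvert \epsilon_i \rvert \le \sigma .
\]

Under such a robustness assumption, a state prepared from moderately perturbed entries is already good enough for the downstream analysis, which is what makes constant-query preparation sufficient.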


2019, Vol 212 (1), pp. 26-37
Author(s): Eyal Lotan, Rajan Jain, Narges Razavian, Girish M. Fatterpekar, Yvonne W. Lui

2020
Author(s): Zhengjing Ma, Gang Mei

Landslides are among the most critical categories of natural disaster worldwide and cause severe damage to human life and to the overall economy. To reduce their negative effects, landslide prevention has become an urgent task, which includes investigating landslide-related information and predicting potential landslides. Machine learning is a state-of-the-art analytics tool that has been widely used in landslide prevention. This paper presents a comprehensive survey of research on machine learning applied to landslide prevention, focusing mainly on (1) image-based landslide detection, (2) landslide susceptibility assessment, and (3) the development of landslide warning systems. Moreover, this paper discusses the current challenges and potential opportunities in applying machine learning algorithms to landslide prevention.
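As an illustration of the second category, susceptibility assessment is commonly framed as binary classification over per-cell conditioning factors. The sketch below follows that framing; the feature set, synthetic data, and model choice are assumptions for illustration, not taken from the survey.

# Minimal sketch: landslide susceptibility as binary classification on synthetic
# per-cell terrain features; real studies use mapped landslide inventories.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical predictors: slope (deg), annual rainfall (mm), distance to fault (m), NDVI.
X = np.column_stack([
    rng.uniform(0, 60, n),
    rng.uniform(500, 3000, n),
    rng.uniform(0, 5000, n),
    rng.uniform(-0.1, 0.9, n),
])
# Synthetic label: steeper, wetter, fault-proximal, sparsely vegetated cells slide more often.
logit = 0.08 * X[:, 0] + 0.002 * X[:, 1] - 0.0008 * X[:, 2] - 3.0 * X[:, 3] - 5.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
susceptibility = model.predict_proba(X_te)[:, 1]   # per-cell susceptibility score in [0, 1]
print('AUC:', round(roc_auc_score(y_te, susceptibility), 3))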


2006, Vol 35 (3)
Author(s): Arūnas Žvironas, Egidijus Kazanavičius

Digital signal processing may, in the general case, be implemented on multi-channel structures. In most cases such structures have a heterogeneous architecture in which a Kahn process network and correlation are used to process the data flows. This paper presents a methodology for the design of heterogeneous systems. The methodology was tested on the design of real devices that control large data flows. Multi-channel structures were used to estimate the influence of the number of channels on data throughput and on the cost of the task, and to determine an optimal number of channels.
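A minimal sketch of the Kahn-style organization mentioned above (not the authors' device): processes communicate only through unbounded FIFO channels with blocking reads, which is what makes the network's output deterministic. The stage names, filter taps, and sample data are illustrative assumptions.

# Minimal Kahn-process-network sketch: source -> correlation stage -> sink,
# connected by unbounded FIFO channels with blocking reads.
import queue
import threading

def source(out_ch, samples):
    for s in samples:
        out_ch.put(s)
    out_ch.put(None)                       # end-of-stream token

def correlate(in_ch, out_ch, taps):
    # Sliding-dot-product stage, standing in for the correlation channel.
    window = [0.0] * len(taps)
    while (x := in_ch.get()) is not None:  # blocking read from the input channel
        window = window[1:] + [x]
        out_ch.put(sum(w * t for w, t in zip(window, taps)))
    out_ch.put(None)

def sink(in_ch, results):
    while (x := in_ch.get()) is not None:
        results.append(x)

ch1, ch2, out = queue.Queue(), queue.Queue(), []
threads = [
    threading.Thread(target=source, args=(ch1, [1.0, 2.0, 3.0, 4.0])),
    threading.Thread(target=correlate, args=(ch1, ch2, [0.5, 0.5])),
    threading.Thread(target=sink, args=(ch2, out)),
]
for t in threads: t.start()
for t in threads: t.join()
print(out)   # deterministic result regardless of thread scheduling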


2020, Vol 35 (33), pp. 2043005
Author(s): Fernanda Psihas, Micah Groh, Christopher Tunnell, Karl Warburton

Neutrino experiments study the least understood of the Standard Model particles by observing their direct interactions with matter or searching for ultra-rare signals. The study of neutrinos typically requires overcoming large backgrounds, elusive signals, and small statistics. The introduction of state-of-the-art machine learning tools to solve analysis tasks has had a major impact on these challenges in neutrino experiments across the board. Machine learning algorithms have become an integral tool of neutrino physics, and their development is of great importance to the capabilities of next-generation experiments. An understanding of the roadblocks, both human and computational, and of the challenges that still exist in the application of these techniques is critical to their proper and beneficial use in physics applications. This review presents the current status of machine learning applications for neutrino physics in terms of the challenges and opportunities at the intersection of these two fields.


2021, Vol 4 (1), pp. 23
Author(s): Usman Naseem, Matloob Khushi, Shah Khalid Khan, Kamran Shaukat, Mohammad Ali Moni

An enormous amount of clinical free-text information, such as pathology reports, progress reports, clinical notes, and discharge summaries, has been collected at hospitals and medical care clinics. These data offer an opportunity to develop many useful machine learning applications if they can be transformed into a learnable structure with appropriate labels for supervised learning. Annotation has to be performed by qualified clinical experts, which limits the use of these data because of the high cost of annotation. Active learning (AL), an underutilised machine learning technique for labelling new data, is a promising candidate for addressing this high labelling cost. AL has been successfully applied to labelling in speech recognition and text classification; however, there is a lack of literature investigating its use for clinical purposes. We performed a comparative investigation of various AL techniques using ML- and deep learning (DL)-based strategies on three unique biomedical datasets. We investigated the random sampling (RS), least confidence (LC), informative diversity and density (IDD), margin, and maximum representativeness-diversity (MRD) AL query strategies. Our experiments show that AL has the potential to significantly reduce the cost of manual labelling. Furthermore, pre-labelling performed using AL expedites the labelling process by reducing the time required for labelling.
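To illustrate two of the query strategies compared above, the sketch below scores an unlabelled pool by least confidence and by margin, using the class probabilities a current model would produce; the synthetic probability matrix and batch size are assumptions.

# Minimal sketch of least-confidence and margin query scoring for active learning.
import numpy as np

def least_confidence(probs):
    # Higher score = model is less sure of its top class = better query candidate.
    return 1.0 - probs.max(axis=1)

def margin(probs):
    # Smaller gap between the top two classes = more informative example.
    top2 = np.sort(probs, axis=1)[:, -2:]
    return top2[:, 1] - top2[:, 0]

rng = np.random.default_rng(0)
pool = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=10)      # 10 unlabelled notes, 3 classes

batch = 3
lc_picks = np.argsort(-least_confidence(pool))[:batch]    # most uncertain first
margin_picks = np.argsort(margin(pool))[:batch]           # smallest margin first
print('least-confidence queries:', lc_picks)
print('margin queries:', margin_picks)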


Author(s): Myeong Sang Yu

The revolutionary development of artificial intelligence (AI) techniques such as machine learning and deep learning has become one of the most important technologies in many parts of industry and is also driving major changes in health care. The big data obtained from electronic medical records and digitized images has accelerated the application of AI technologies in medical fields. Machine learning techniques can deal with the complexity of big data, to which traditional statistics are difficult to apply. Recently, deep learning techniques, including convolutional neural networks, have been considered a promising machine learning approach in medical imaging applications. In the era of precision medicine, otolaryngologists need to understand the potential, pitfalls, and limitations of AI technology, and to seek opportunities to collaborate with data scientists. This article briefly introduces the basic concepts and techniques of machine learning and reviews current work on machine learning applications in the fields of otolaryngology and rhinology.


2021, Vol 2022 (1), pp. 148-165
Author(s): Thomas Cilloni, Wei Wang, Charles Walter, Charles Fleming

Facial recognition tools are becoming exceptionally accurate at identifying people from images. However, this comes at the cost of privacy for users of online services with photo management (e.g., social media platforms). Particularly troubling is the ability to leverage unsupervised learning to recognize faces even when the user has not labeled their images. In this paper we propose Ulixes, a strategy to generate visually non-invasive facial noise masks that yield adversarial examples, preventing the formation of identifiable user clusters in the embedding space of facial encoders. This is applicable even when a user is unmasked and labeled images are available online. We demonstrate the effectiveness of Ulixes by showing that various classification and clustering methods cannot reliably label the adversarial examples we generate. We also study the effects of Ulixes in various black-box settings and compare it to the current state of the art in adversarial machine learning. Finally, we challenge the effectiveness of Ulixes against adversarially trained models and show that it is robust to countermeasures.
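As a rough sketch of the general idea only (not the Ulixes method itself): an L_inf-bounded noise mask can be optimised to push a photo's embedding away from its clean embedding, so clustering in encoder space no longer groups the user's images. The tiny stand-in encoder and every hyperparameter below are assumptions.

# Sketch: signed-gradient optimisation of a small noise mask against a stand-in
# facial encoder, maximising drift of the embedding from its clean value.
import torch

torch.manual_seed(0)
encoder = torch.nn.Sequential(              # stand-in for a pretrained facial encoder
    torch.nn.Flatten(),
    torch.nn.Linear(3 * 32 * 32, 128),
)
encoder.requires_grad_(False)

image = torch.rand(1, 3, 32, 32)            # a user photo with values in [0, 1]
clean_emb = encoder(image).detach()

eps, step, iters = 8 / 255, 1 / 255, 40     # visually small L_inf budget
delta = torch.zeros_like(image, requires_grad=True)

for _ in range(iters):
    loss = -torch.nn.functional.mse_loss(encoder(image + delta), clean_emb)
    loss.backward()                          # gradient of (negative) embedding drift
    with torch.no_grad():
        delta -= step * delta.grad.sign()    # signed-gradient step on the mask
        delta.clamp_(-eps, eps)              # keep the noise visually non-invasive
    delta.grad.zero_()

print('embedding shift:', torch.norm(encoder(image + delta) - clean_emb).item())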

