Heterogeneous Large-Scale Distributed Systems on Machine Learning

Author(s):  
Karthika Paramasivam ◽  
Prathap M. ◽  
Hussain Sharif

TensorFlow is an interface for expressing machine learning algorithms and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards. The system is flexible and can be used to express a wide variety of algorithms, including training and inference algorithms for deep neural network models, and it has been used to conduct research and to deploy machine learning systems in more than a dozen areas of computer science and other fields, including speech recognition, computer vision, robotics, information retrieval, natural language processing, geographic information extraction, and computational drug discovery. This chapter describes the TensorFlow interface and an implementation of that interface built at Google.
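As a minimal illustration of the kind of computation described above (not taken from the chapter; the model architecture and random data are placeholders), the following sketch expresses a small neural network with TensorFlow's Python API and converts it for execution on a mobile runtime. The same program runs unchanged on a CPU or a GPU.

```python
# Illustrative sketch: a small TensorFlow model that runs unchanged on CPU/GPU
# and can be exported for mobile execution. Layer sizes and data are arbitrary.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random placeholder data stands in for a real dataset such as MNIST.
x = np.random.rand(256, 784).astype("float32")
y = np.random.randint(0, 10, size=(256,))
model.fit(x, y, epochs=1, batch_size=32)

# The same model can be exported for mobile/embedded execution via TFLite.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
```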

Author(s):  
Sacha J. van Albada ◽  
Jari Pronold ◽  
Alexander van Meegen ◽  
Markus Diesmann

Abstract We are entering an age of ‘big’ computational neuroscience, in which neural network models are increasing in size and in numbers of underlying data sets. Consolidating the zoo of models into large-scale models simultaneously consistent with a wide range of data is only possible through the effort of large teams, which can be spread across multiple research institutions. To ensure that computational neuroscientists can build on each other’s work, it is important to make models publicly available as well-documented code. This chapter describes such an open-source model, which relates the connectivity structure of all vision-related cortical areas of the macaque monkey with their resting-state dynamics. We give a brief overview of how to use the executable model specification, which employs NEST as simulation engine, and show its runtime scaling. The solutions found serve as an example for organizing the workflow of future models from the raw experimental data to the visualization of the results, expose the challenges, and give guidance for the construction of an ICT infrastructure for neuroscience.
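As a brief, hypothetical illustration of the PyNEST workflow that such an executable model specification builds on (this is not the multi-area model itself; the population size and parameters are arbitrary), the following sketch creates a small population, connects it, and simulates it.

```python
# Minimal PyNEST sketch: a small excitatory population driven by Poisson noise,
# illustrating the create/connect/simulate workflow used by NEST models.
import nest

nest.ResetKernel()

neurons = nest.Create("iaf_psc_alpha", 100)                  # LIF neurons
noise = nest.Create("poisson_generator", params={"rate": 8000.0})
recorder = nest.Create("spike_recorder")                     # "spike_detector" in NEST 2.x

nest.Connect(noise, neurons, syn_spec={"weight": 20.0})
nest.Connect(neurons, neurons,
             conn_spec={"rule": "fixed_indegree", "indegree": 10},
             syn_spec={"weight": 5.0, "delay": 1.5})
nest.Connect(neurons, recorder)

nest.Simulate(1000.0)                                        # simulate one second
print(nest.GetStatus(recorder, "n_events"))
```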


Symmetry ◽  
2020 ◽  
Vol 12 (11) ◽  
pp. 1756
Author(s):  
Zhe Li ◽  
Mieradilijiang Maimaiti ◽  
Jiabao Sheng ◽  
Zunwang Ke ◽  
Wushour Silamu ◽  
...  

The task of dialogue generation has attracted increasing attention due to its diverse downstream applications, such as question-answering systems and chatbots. Recently, deep neural network (DNN)-based dialogue generation models have achieved superior performance over conventional models that use statistical machine learning methods. However, although an enormous number of state-of-the-art DNN-based models have been proposed, detailed empirical comparative analyses of these models on open Chinese corpora are still lacking. As a result, relevant researchers and engineers may find it hard to get an intuitive understanding of the current research progress. To address this challenge, we conducted an empirical study of state-of-the-art DNN-based dialogue generation models on various Chinese corpora. Specifically, extensive experiments were performed on several well-known single-turn and multi-turn dialogue corpora, including KdConv, Weibo, and Douban, to evaluate a wide range of dialogue generation models based on the symmetrical architecture of Seq2Seq, RNNSearch, the Transformer, generative adversarial nets, and reinforcement learning, respectively. Moreover, we paid special attention to the prevalent pre-trained model and its effect on the quality of dialogue generation. Performance was evaluated with four metrics widely used in this area: BLEU, pseudo, distinct, and ROUGE. Finally, we report a case study showing example responses generated by each of these models.
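As a small, hedged illustration of two of the metrics mentioned above, the sketch below computes sentence-level BLEU with NLTK and a simple distinct-n ratio. The tokenized example replies are invented, and distinct-n follows one common formulation rather than the exact implementation used in the paper.

```python
# Illustrative metric sketch: BLEU via NLTK and a simple distinct-n ratio
# (unique n-grams divided by total n-grams across generated responses).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def distinct_n(responses, n=1):
    ngrams, total = set(), 0
    for tokens in responses:
        grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        ngrams.update(grams)
        total += len(grams)
    return len(ngrams) / total if total else 0.0

reference = ["今天", "天气", "很", "好"]      # tokenized reference reply (invented)
candidate = ["今天", "天气", "不错"]          # tokenized model reply (invented)

bleu = sentence_bleu([reference], candidate, weights=(0.5, 0.5),
                     smoothing_function=SmoothingFunction().method1)
print("BLEU:", bleu)
print("distinct-1:", distinct_n([candidate], n=1))
```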


2021 ◽  
Author(s):  
Flávio Arthur Oliveira Santos ◽  
Cleber Zanchettin ◽  
Leonardo Nogueira Matos ◽  
Paulo Novais

Abstract Robustness is a significant constraint on machine learning models. The performance of an algorithm must not deteriorate when it is trained and tested on slightly different data. Deep neural network models achieve awe-inspiring results in a wide range of computer vision applications. Still, in the presence of noise or region occlusion, some models exhibit inaccurate performance even on data handled during training. Besides, some experiments suggest that deep learning models sometimes use the wrong parts of the input information to perform inference. Active image augmentation (ADA) is an augmentation method that uses interpretability methods to augment the training data and improve model robustness against the problems described above. Although ADA presented interesting results, its original version only used vanilla backpropagation interpretability to train the U-Net model. In this work, we propose an extensive experimental analysis of the impact of the interpretability method on ADA. We use five interpretability methods: vanilla backpropagation, guided backpropagation, gradient-weighted class activation mapping (GradCam), guided GradCam, and InputXGradient. The results show that all methods achieve similar performance at the end of training, but that combining ADA with GradCam gives the U-Net model impressively fast convergence.
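For readers unfamiliar with the interpretability signals compared here, the following sketch shows how a vanilla-backpropagation saliency map and the InputXGradient variant can be computed in PyTorch. The tiny placeholder model and random input are assumptions for illustration and do not reproduce the U-Net setup from the paper.

```python
# Hedged sketch: vanilla-gradient saliency and InputXGradient in PyTorch.
# The classifier below is a placeholder; any differentiable image model works.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
).eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # random placeholder input
logits = model(image)
score = logits[0, logits.argmax()]                        # score of the predicted class
score.backward()                                          # vanilla backpropagation

saliency = image.grad.abs().max(dim=1)[0]                 # per-pixel importance map
input_x_gradient = (image.detach() * image.grad).sum(dim=1)  # InputXGradient variant
```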


2021 ◽  
Vol 3 (3) ◽  
pp. 208-222
Author(s):  
B. Vivekanandam

Crowd counting in image and video analysis is an active research area. Over the last two decades, many crowd counting algorithms have been developed for a wide range of applications in crisis management systems, large-scale events, workplace safety, and other areas. Neural networks produce outstandingly precise point estimates in the computer vision domain; however, the degree of uncertainty in those estimates is rarely reported. Quantifying uncertainty alongside the point estimate is beneficial, since it can improve the quality of decisions and predictions. The proposed framework integrates a lightweight CNN (LW-CNN) for crowd counting in any public place, delivering higher counting accuracy. Further, the framework has been trained through various scene analyses, covering both fully and partially visible heads. Owing to the various scaling sets in the proposed neural network framework, it can easily categorize partially visible heads and counts them more accurately than other pre-trained neural network models. The proposed framework thus estimates head counts in public places during COVID-19 with higher accuracy while consuming less time.
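As a purely hypothetical sketch of the density-map approach that lightweight crowd-counting CNNs typically follow (the layer configuration below is invented and is not the LW-CNN described in the paper), the predicted head count is obtained by summing a one-channel density map.

```python
# Hypothetical lightweight density-map counter; layer sizes are illustrative only.
import torch
import torch.nn as nn

class LightweightCounter(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 1, 1),            # one-channel density map
        )

    def forward(self, x):
        return self.features(x)

model = LightweightCounter()
frame = torch.rand(1, 3, 480, 640)          # a single camera frame (random placeholder)
density = model(frame)
print("estimated head count:", density.sum().item())  # count = sum of density map
```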


ChemMedChem ◽  
2021 ◽  
Author(s):  
Christoph Grebner ◽  
Hans Matter ◽  
Daniel Kofink ◽  
Jan Wenzel ◽  
Friedemann Schmidt ◽  
...  

1997 ◽  
pp. 931-935 ◽  
Author(s):  
Anders Lansner ◽  
Örjan Ekeberg ◽  
Erik Fransén ◽  
Per Hammarlund ◽  
Tomas Wilhelmsson
