Combining high-performance hardware, cloud computing, and deep learning frameworks to accelerate physical simulations: probing the Hopfield network

2020 ◽  
Vol 41 (3) ◽  
pp. 035802
Author(s):  
Vaibhav S Vavilala
2020 ◽  
Vol 67 ◽  
pp. 285-325
Author(s):  
William Cohen ◽  
Fan Yang ◽  
Kathryn Rivard Mazaitis

We present an implementation of a probabilistic first-order logic called TensorLog, in which classes of logical queries are compiled into differentiable functions in a neural-network infrastructure such as TensorFlow or Theano. This leads to a close integration of probabilistic logical reasoning with deep-learning infrastructure: in particular, it enables high-performance deep learning frameworks to be used for tuning the parameters of a probabilistic logic. The integration with these frameworks enables the use of GPU-based parallel processors for inference and learning, making TensorLog the first highly parallelizable probabilistic logic. Experimental results show that TensorLog scales to problems involving hundreds of thousands of knowledge-base triples and tens of thousands of examples.
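To make the compilation idea concrete, here is a minimal sketch of the core mechanism (illustrative only, not the authors' code): each binary relation in the knowledge base is encoded as a matrix over entities, and a chain rule compiles to a sequence of matrix-vector products, which are differentiable and so can be tuned with gradient descent in a framework like TensorFlow. The entities and facts below are hypothetical.

```python
# Minimal sketch of TensorLog-style query compilation: the rule
#   grandparent(X, Y) :- parent(X, Z), parent(Z, Y)
# compiles to two differentiable matrix-vector products.
import numpy as np

entities = ["abe", "homer", "bart", "lisa"]
idx = {e: i for i, e in enumerate(entities)}
n = len(entities)

# M_parent[i, j] = confidence weight of the fact parent(entity_i, entity_j).
M_parent = np.zeros((n, n))
for head, tail, w in [("abe", "homer", 1.0),
                      ("homer", "bart", 1.0),
                      ("homer", "lisa", 0.9)]:
    M_parent[idx[head], idx[tail]] = w

def grandparent(x):
    """Compiled query: one-hot entity vector in, scores over answers out."""
    v = np.zeros(n)
    v[idx[x]] = 1.0
    return v @ M_parent @ M_parent  # two hops of 'parent'

print(dict(zip(entities, grandparent("abe"))))  # mass lands on bart and lisa
```

In a real deployment the relation matrices are sparse and their entries are trainable parameters, which is what lets a deep learning framework tune the probabilistic logic on GPUs.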


Author(s):  
Kavita Srivastava

The steep rise in autonomous systems and the internet of things (IoT) in recent years has changed the way computation is performed. With built-in AI (artificial intelligence) in IoT and cyber-physical systems, the need for high-performance computing has emerged. Cloud computing is no longer sufficient for sensor-driven systems that continuously collect data from the environment. Sensor-based systems such as autonomous vehicles require data analysis and prediction in real time, which is not possible with a centralized cloud alone. This scenario has given rise to a new computing paradigm called edge computing. Edge computing requires that data storage, analysis, and prediction be performed on the network edge rather than on a cloud server, enabling quick response and lower storage overhead. Intelligence at the edge can be obtained through deep learning. This chapter covers various deep learning frameworks, hardware, and systems for edge computing, with examples of deep neural network training using the Caffe2 framework.
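As a rough illustration of the kind of training workflow the chapter describes, the following is a minimal sketch using Caffe2's Python API; the blob names, layer sizes, and toy data are hypothetical, and the chapter's own examples may differ in detail.

```python
# Minimal sketch of training a small network with Caffe2's Python API
# (hypothetical blob names, sizes, and data; not the chapter's exact example).
import numpy as np
from caffe2.python import brew, model_helper, optimizer, workspace

# Feed a toy batch: 16 samples, 8 features, 2 classes.
workspace.FeedBlob("data", np.random.rand(16, 8).astype(np.float32))
workspace.FeedBlob("label", np.random.randint(0, 2, 16).astype(np.int32))

model = model_helper.ModelHelper(name="edge_mlp")
fc1 = brew.fc(model, "data", "fc1", dim_in=8, dim_out=16)
relu1 = brew.relu(model, fc1, "relu1")
fc2 = brew.fc(model, relu1, "fc2", dim_in=16, dim_out=2)
softmax = brew.softmax(model, fc2, "softmax")

# Loss, gradients, and a plain SGD update for training.
xent = model.LabelCrossEntropy([softmax, "label"], "xent")
loss = model.AveragedLoss(xent, "loss")
model.AddGradientOperators([loss])
optimizer.build_sgd(model, base_learning_rate=0.1)

workspace.RunNetOnce(model.param_init_net)  # initialize parameters
workspace.CreateNet(model.net)
for _ in range(10):                         # a few training iterations
    workspace.RunNet(model.net)
print("loss:", workspace.FetchBlob("loss"))
```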


2016 ◽  
Vol 11 (1) ◽  
pp. 72-80
Author(s):  
O.V. Darintsev ◽  
A.B. Migranov

The article considers one possible approach to synthesizing group control of mobile robots based on the use of cloud computing. A distinctive feature of the proposed techniques is that the architecture of the control and information systems, the methods of organizing information exchange, etc., adequately reflect the specifics of the application domain and the tasks solved by the group. The approach proposed by the authors makes it possible to increase the reliability and robustness of robot collectives and to lower the requirements on onboard computers while preserving high overall performance.
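To illustrate the offloading idea in general terms (not the authors' architecture), the sketch below shows a robot client sending its sensor state to a cloud planner and receiving a motion command; the endpoint, message format, and planner behavior are all hypothetical. The onboard computer then only needs to gather state and execute commands, which is what lowers its hardware requirements.

```python
# Illustrative sketch of cloud offloading for a mobile robot (hypothetical
# endpoint and message format; not the authors' system).
import json
import urllib.request

CLOUD_PLANNER_URL = "http://cloud.example.com/plan"  # hypothetical endpoint

def request_plan(robot_id, pose, lidar_scan):
    """Send the robot's state to the cloud planner and return its command."""
    payload = json.dumps({
        "robot_id": robot_id,
        "pose": pose,        # e.g. [x, y, heading]
        "scan": lidar_scan,  # raw sensor data, processed in the cloud
    }).encode("utf-8")
    req = urllib.request.Request(
        CLOUD_PLANNER_URL, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=1.0) as resp:
        return json.loads(resp.read())  # e.g. {"v": 0.4, "omega": 0.1}

# cmd = request_plan("r1", [0.0, 0.0, 0.0], [2.1, 2.0, 1.8])
```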


2020 ◽  
Vol 26 ◽  
Author(s):  
Xiaoping Min ◽  
Fengqing Lu ◽  
Chunyan Li

Enhancer-promoter interactions (EPIs) in the human genome are of great significance to transcriptional regulation, which tightly controls gene expression. Identifying EPIs can help us better decipher gene regulation and understand disease mechanisms. However, experimental methods to identify EPIs are constrained by funding, time, and manpower, while computational methods using DNA sequences and genomic features are viable alternatives. Deep learning methods have shown promising prospects in classification and have been applied to EPI identification. In this survey, we focus specifically on sequence-based deep learning methods and conduct a comprehensive review of their literature. We first briefly introduce existing sequence-based frameworks for EPI prediction and their technical details. After that, we elaborate on the datasets, pre-processing methods, and evaluation strategies. Finally, we discuss the challenges these methods confront and suggest several future opportunities.
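A typical sequence-based framework one-hot encodes the enhancer and promoter sequences and feeds them through parallel convolutional branches before a joint classifier. The following is a minimal sketch of that pattern in tf.keras; the sequence lengths, filter counts, and layer choices are hypothetical and not taken from any specific surveyed model.

```python
# Minimal sketch of a two-branch sequence-based EPI classifier in tf.keras
# (hypothetical hyperparameters; the surveyed models differ in detail).
import tensorflow as tf

def seq_branch(length, name):
    """One-hot encoded DNA (A, C, G, T) -> convolutional feature vector."""
    inp = tf.keras.Input(shape=(length, 4), name=name)
    x = tf.keras.layers.Conv1D(64, 16, activation="relu")(inp)
    x = tf.keras.layers.MaxPooling1D(8)(x)
    x = tf.keras.layers.GlobalMaxPooling1D()(x)
    return inp, x

enh_in, enh_feat = seq_branch(3000, "enhancer")  # e.g. 3 kb enhancer window
pro_in, pro_feat = seq_branch(2000, "promoter")  # e.g. 2 kb promoter window

merged = tf.keras.layers.Concatenate()([enh_feat, pro_feat])
hidden = tf.keras.layers.Dense(128, activation="relu")(merged)
out = tf.keras.layers.Dense(1, activation="sigmoid", name="interacts")(hidden)

model = tf.keras.Model([enh_in, pro_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
```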


2020 ◽  
Vol 13 (3) ◽  
pp. 313-318 ◽  
Author(s):  
Dhanapal Angamuthu ◽  
Nithyanandam Pandian

Background: Cloud computing is the modern trend in high-performance computing. It has become very popular due to its characteristics of anywhere availability, elasticity, ease of use, cost-effectiveness, etc. Though the cloud grants various benefits, it has associated issues and challenges that prevent organizations from adopting it.

Objective: The objective of this paper is to cover several perspectives of cloud computing. This includes a basic definition of the cloud and a classification of clouds based on delivery and deployment models. The broad classes of issues and challenges organizations face in adopting the cloud computing model are explored, for example data-related issues and service-availability issues. Detailed sub-classifications of each issue and challenge are discussed; for example, data-related issues are further classified into data security, data integrity, data location, and multitenancy issues. The paper also covers the typical problem of vendor lock-in and analyzes the various possible insider attacks unique to the cloud environment.

Results: Guidelines and recommendations for the different issues and challenges are discussed. Most importantly, potential research areas in the cloud domain are explored.

Conclusion: This paper discussed cloud computing, its classifications, and the several issues and challenges faced in adopting the cloud, along with guidelines and recommendations for them, and captured potential research areas in the cloud domain. This helps researchers, academicians, and industry to focus on and address the current challenges faced by customers.


Author(s):  
Xiangbing Zhao ◽  
Jianhui Zhou

With the advent of the computer network era, people think in deeper ways and with new methods. At the same time, the power information network faces the problem of information leakage. Research on power information network intrusion detection helps prevent intrusions and attacks by bad actors, ensures the safety of information, and protects state secrets and personal privacy. In this paper, through the NRIDS model and a network data analysis method based on deep learning and cloud computing, a requirements analysis for a real-time intrusion detection system for the power information network is carried out. The advantages and disadvantages of existing packet capture mechanisms are compared, and a high-speed packet capture mechanism is then designed based on DPDK. Since cloud computing and power information networks are the most commonly used tools and channels for obtaining information in daily life, life would be difficult to carry on without them, so we must ensure the security of network information through intrusion detection and defense measures.
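DPDK itself is a C framework, so as a language-neutral illustration of the capture-then-score pipeline such a system implements, here is a minimal Python sketch in which a Linux raw socket stands in for the high-speed capture layer; the scoring function is a placeholder heuristic, not the paper's NRIDS model.

```python
# Illustrative capture-and-score loop (Linux only, requires root; a raw
# socket stands in for the DPDK capture layer, and the scorer is a toy
# placeholder, not the paper's NRIDS model).
import socket
import struct

ETH_P_ALL = 0x0003  # capture frames of every protocol

def score_packet(frame: bytes) -> float:
    """Placeholder anomaly score; a real system would run a trained model."""
    return min(len(frame) / 1500.0, 1.0)  # toy heuristic: size-based score

with socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                   socket.htons(ETH_P_ALL)) as sock:
    for _ in range(100):                  # inspect a small batch of frames
        frame, _addr = sock.recvfrom(65535)
        _dst, _src, ethertype = struct.unpack("!6s6sH", frame[:14])
        if score_packet(frame) > 0.9:     # flag suspicious frames
            print(f"suspicious frame: ethertype=0x{ethertype:04x}, "
                  f"len={len(frame)}")
```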


Entropy ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. 223
Author(s):  
Yen-Ling Tai ◽  
Shin-Jhe Huang ◽  
Chien-Chang Chen ◽  
Henry Horng-Shing Lu

Nowadays, deep learning methods with high structural complexity and flexibility inevitably lean on the computational capability of the hardware. A platform with high-performance GPUs and large amounts of memory can support neural networks with large numbers of layers and kernels. However, naively pursuing high-cost hardware would likely impede the technical development of deep learning methods. In this article, we therefore establish a new preprocessing method to reduce the computational complexity of the neural networks. Inspired by the band theory of solids in physics, we map the image space isomorphically onto a non-interacting physical system and treat image voxels as particle-like clusters. We then use the Fermi–Dirac distribution as a correction function for normalizing voxel intensity and as a filter of insignificant cluster components. The filtered clusters can then delineate the morphological heterogeneity of the image voxels. We used the BraTS 2019 datasets and the dimensional fusion U-net for algorithmic validation, and the proposed Fermi–Dirac correction function exhibited performance comparable to the other preprocessing methods employed. Compared with the conventional z-score normalization function and the Gamma correction function, the proposed algorithm saves at least 38% of the computational time cost on a low-cost hardware architecture. Even though global histogram equalization has the lowest computational time among the employed correction functions, the proposed Fermi–Dirac correction function exhibits better image augmentation and segmentation capabilities.
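As a rough illustration (not the authors' exact formulation), a Fermi–Dirac-style correction maps each rescaled voxel intensity v through the occupation function 1 / (exp((v − μ)/T) + 1), where μ acts as a soft intensity threshold and the temperature T controls how sharply intensities are separated. The sketch below applies it as a normalization and filter; the choices of μ and T are heuristic assumptions.

```python
# Sketch of a Fermi–Dirac-style intensity correction for image volumes
# (illustrative parameters; not the authors' exact formulation).
import numpy as np

def fermi_dirac_correction(volume, mu=None, temperature=0.1):
    """Map intensities through 1 / (exp((v - mu) / T) + 1).

    mu acts as a soft threshold (default: median of nonzero voxels) and
    temperature controls the sharpness of the transition: intensities far
    below mu saturate toward 1, those far above mu toward 0.
    """
    v = volume.astype(np.float64)
    v = (v - v.min()) / (v.max() - v.min() + 1e-12)  # rescale to [0, 1]
    if mu is None:
        mu = np.median(v[v > 0])                     # heuristic threshold
    return 1.0 / (np.exp((v - mu) / temperature) + 1.0)

# Toy usage on a random volume standing in for an MRI scan:
vol = np.random.rand(8, 8, 8)
corrected = fermi_dirac_correction(vol)
print(corrected.min(), corrected.max())
```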

