GPU Clusters
Recently Published Documents


TOTAL DOCUMENTS: 243 (five years: 57)
H-INDEX: 24 (five years: 3)

2021
Author(s): Marko van Treeck, Didem Cifci, Narmin Ghaffari Laleh, Oliver Lester Saldanha, Chiara Maria Lavinia Loeffler, ...

The interpretation of digitized histopathology images has been transformed thanks to artificial intelligence (AI). End-to-end AI algorithms can infer high-level features directly from raw image data, extending the capabilities of human experts. In particular, AI can predict tumor subtypes, genetic mutations and gene expression directly from hematoxylin and eosin (H&E) stained pathology slides. However, existing end-to-end AI workflows are poorly standardized and not easily adaptable to new tasks. Here, we introduce DeepMed, a Python library for predicting any high-level attribute directly from histopathological whole slide images alone, or from images coupled with additional meta-data (https://github.com/KatherLab/deepmed). Unlike earlier computational pipelines, DeepMed is highly developer-friendly: its structure is modular and separates preprocessing, training, deployment, statistics, and visualization in such a way that any one of these processes can be altered without affecting the others. Also, DeepMed scales easily from local use on laptop computers to multi-GPU clusters in cloud computing services and therefore can be used for teaching, prototyping and for large-scale applications. Finally, DeepMed is user-friendly and allows researchers to easily test multiple hypotheses in a single dataset (via cross-validation) or in multiple datasets (via external validation). Here, we demonstrate and document DeepMed's abilities to predict molecular alterations, histopathological subtypes and molecular features from routine histopathology images, using a large benchmark dataset which we release publicly. In summary, DeepMed is a fully integrated and broadly applicable end-to-end AI pipeline for the biomedical research community.
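The modular design described in this abstract, in which preprocessing, training, deployment, statistics, and visualization are decoupled so that any one stage can be replaced without touching the others, can be pictured as a pipeline of independent callables. The sketch below is purely illustrative: the stage names and the Stage/run_pipeline helpers are hypothetical placeholders and do not reproduce DeepMed's actual API.

```python
# Illustrative sketch of a stage-separated pipeline. The names below are
# hypothetical placeholders, NOT DeepMed's real API; the point is only the
# design idea of decoupled stages that can be swapped independently.
from typing import Any, Callable, List

Stage = Callable[[Any], Any]

def run_pipeline(stages: List[Stage], data: Any) -> Any:
    """Feed each stage's output into the next; stages know nothing about each other."""
    for stage in stages:
        data = stage(data)
    return data

def preprocess(slide_paths):            # e.g. tile whole-slide images, attach labels
    return [{"tiles": p, "label": None} for p in slide_paths]

def train_crossval(dataset, folds=3):   # e.g. k-fold cross-validation within one cohort
    return {"dataset": dataset, "fold_scores": [None] * folds}

def summarize(results):                 # e.g. statistics / visualization stage
    return {"n_folds": len(results["fold_scores"])}

if __name__ == "__main__":
    report = run_pipeline([preprocess, train_crossval, summarize],
                          ["slide_001.svs", "slide_002.svs"])
    print(report)   # {'n_folds': 3}
```

Because each stage only consumes the previous stage's output, swapping the cross-validation stage for an external-validation stage (training on one cohort, deploying on another) leaves the preprocessing and statistics code untouched, which is the workflow flexibility the abstract emphasizes.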


Author(s): Giuseppe M. J. Barca, Melisa Alkan, Jorge L. Galvez-Vallejo, David L. Poole, Alistair P. Rendell, ...

2021
Author(s): Deepak Narayanan, Mohammad Shoeybi, Jared Casper, Patrick LeGresley, Mostofa Patwary, ...

2021
Vol. 81 (10)
Author(s): Eduardo I. Bribián, Jorge Dasilva Golán, Margarita García Pérez, Alberto Ramos

Abstract: In this paper we explore a finite volume renormalization scheme that combines three main ingredients: a coupling based on the gradient flow, the use of twisted boundary conditions, and a particular asymmetric geometry, which for SU(N) gauge theories consists of a hypercubic box of size $l^2 \times (Nl)^2$, a choice motivated by the study of volume independence in large N gauge theories. We argue that this scheme has several advantages that make it particularly suited for precision determinations of the strong coupling, among them translational invariance, an analytic expansion in the coupling, and a reduced memory footprint with respect to standard simulations on symmetric lattices, allowing for a more efficient use of current GPU clusters. We test this scheme numerically with a determination of the $\Lambda$ parameter in the SU(3) pure gauge theory. We show that the use of an asymmetric geometry has no significant impact on the size of scaling violations, obtaining a value $\Lambda_{\overline{\mathrm{MS}}}\sqrt{8 t_0} = 0.603(17)$ in good agreement with the existing literature. The role of topology freezing, which is relevant for the determination of the coupling in this particular scheme and for large N applications, is discussed in detail.
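For context, finite volume gradient flow schemes of this type define the renormalized coupling from the flowed action density evaluated at a flow time tied to the box size. The display below is only a schematic reminder of that standard construction, not the paper's own formula: $\ell$ stands generically for the relevant finite volume scale, $c$ for the scheme parameter fixing $\sqrt{8t}$ in units of $\ell$, and $\mathcal{N}(c)$ for a geometry-dependent normalization whose precise value for the twisted $l^2 \times (Nl)^2$ setup is worked out in the paper and not reproduced here.

$$
g^{2}_{\mathrm{GF}}(\mu) \;=\; \mathcal{N}(c)^{-1}\,\big\langle\, t^{2} E(t) \,\big\rangle\Big|_{\sqrt{8t}\,=\,c\,\ell}\,,
\qquad
\mu \;=\; \frac{1}{c\,\ell}\,,
\qquad
E(t) \;=\; -\tfrac{1}{2}\,\mathrm{tr}\big[G_{\mu\nu}(t)\,G_{\mu\nu}(t)\big],
$$

where $G_{\mu\nu}(t)$ is the field strength of the flowed gauge field at flow time $t$.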


Author(s): Dajun Chang, Li Li, Ying Chang, Zhangquan Qiao

Abstract: With the rapid growth of data volumes, massive data has become one of the factors hindering the development of enterprises, and processing data effectively while reducing the concurrency pressure of data access has become the driving force behind the continued development of big data solutions. This article studies a MapReduce parallel computing framework based on multiple data fusion sensors and GPU clusters. The experimental environment is a fully distributed Hadoop cluster, and the single-source shortest path algorithm based on MapReduce is implemented entirely in Java. Eight ordinary physical machines are used to build the fully distributed cluster, and the configuration of each node is essentially the same. The MapReduce framework divides a submitted job into several map tasks and assigns them to different computing nodes; after the map phase, intermediate files consistent with the final file format are generated, at which point the system creates several reduce tasks and distributes these files to different cluster nodes for execution. The experiments verify how the running time of the PSON algorithm changes as the size of the test data set gradually increases, while the hardware and software configuration of the Hadoop platform is kept unchanged. When the number of computing nodes increases from 2 to 4, the running time is significantly reduced; as the number of computing nodes continues to increase, the reduction in running time becomes less and less significant. The results show that NESTOR can complete the basic MapReduce workflow and simplifies the development of GPU programs for users, yielding a significant speedup for computation-heavy applications.
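As a point of reference for the map/reduce split described in the abstract, the sketch below shows one iteration of a MapReduce-style single-source shortest path computation (a parallel Bellman-Ford step). The paper's implementation is in Java on Hadoop; this Python version is only a conceptual illustration with a locally simulated shuffle step, not the authors' code.

```python
# Conceptual sketch of one MapReduce iteration of single-source shortest path.
# Illustration only; not the Java/Hadoop implementation described in the paper.
from collections import defaultdict

INF = float("inf")

def map_phase(node_id, record):
    """record = (current_distance, adjacency list of (neighbor, edge_weight)).
    Emit the node's own structure plus tentative distances to its neighbors."""
    dist, adj = record
    yield node_id, ("NODE", record)              # pass the graph structure through
    if dist < INF:
        for neighbor, weight in adj:
            yield neighbor, ("DIST", dist + weight)

def reduce_phase(node_id, values):
    """Keep the node structure and the minimum tentative distance seen so far."""
    dist, adj = INF, []
    for tag, payload in values:
        if tag == "NODE":
            dist, adj = min(dist, payload[0]), payload[1]
        else:
            dist = min(dist, payload)
    return node_id, (dist, adj)

def run_iteration(graph):
    """Simulate shuffle-and-sort locally: group mapper output by key, then reduce."""
    grouped = defaultdict(list)
    for node_id, record in graph.items():
        for key, value in map_phase(node_id, record):
            grouped[key].append(value)
    return dict(reduce_phase(k, v) for k, v in grouped.items())

if __name__ == "__main__":
    # Tiny example: source node "A" at distance 0, the others initially unreached.
    graph = {
        "A": (0,   [("B", 1), ("C", 4)]),
        "B": (INF, [("C", 2)]),
        "C": (INF, []),
    }
    for _ in range(2):                 # iterate until distances stop changing
        graph = run_iteration(graph)
    print(graph)   # {'A': (0, [...]), 'B': (1, [...]), 'C': (3, [])}
```

On a real cluster the grouping done by run_iteration is performed by Hadoop's shuffle between the map and reduce phases, which is exactly the job-splitting and intermediate-file step the abstract describes.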


2021
Author(s): Lizhi Zhang, Zhiquan Lai, Shengwei Li, Yu Tang, Feng Liu, ...
