ScudWare: Software Infrastructure for SmartShadow

Author(s):  
Zhaohui Wu ◽  
Gang Pan


Author(s):  
V. P. Koryachko ◽  
D. A. Perepelkin ◽  
M. A. Ivanchikova ◽  
V. S. Byshov ◽  
...  

Author(s):  
Jordan Musser ◽  
Ann S Almgren ◽  
William D Fullmer ◽  
Oscar Antepara ◽  
John B Bell ◽  
...  

MFIX-Exa is a computational fluid dynamics–discrete element model (CFD-DEM) code designed to run efficiently on current and next-generation supercomputing architectures. MFIX-Exa combines the CFD-DEM expertise embodied in the MFIX code, which was developed at NETL and is widely used in academia and industry, with AMReX, a modern software framework developed at LBNL. The fundamental physics models follow those of the original MFIX, but the combination of new algorithmic approaches and a new software infrastructure will enable MFIX-Exa to leverage future exascale machines to optimize the modeling and design of multiphase chemical reactors.
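
The coupling at the heart of the CFD-DEM method can be illustrated with a short sketch. The following one-dimensional Python toy shows drag-mediated momentum exchange between a grid-based fluid and discrete particles; the coefficients, the explicit update, and all variable names are hypothetical, and nothing here reflects MFIX-Exa's or AMReX's actual API or discretization.

```python
# Generic 1-D sketch of CFD-DEM coupling: the fluid lives on a grid,
# particles live at continuous positions, and drag exchanges momentum
# between the two phases each step. Illustration only.
import numpy as np

ncell, dx, dt, beta = 16, 1.0, 0.01, 5.0   # hypothetical drag coefficient beta
u_fluid = np.ones(ncell)                   # fluid velocity per cell
x_p = np.array([2.3, 7.9, 12.4])           # particle positions
v_p = np.zeros(3)                          # particle velocities

for _ in range(100):
    cells = (x_p / dx).astype(int)         # cell each particle occupies
    drag = beta * (u_fluid[cells] - v_p)   # drag force on each particle
    v_p += dt * drag                       # DEM update (no collisions here)
    x_p = (x_p + dt * v_p) % (ncell * dx)  # periodic domain
    np.add.at(u_fluid, cells, -dt * drag)  # equal and opposite force on fluid

print(v_p)   # particles relax toward the local fluid velocity
```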


1994 ◽  
Vol 3 (2) ◽  
pp. 111-129 ◽  
Author(s):  
William Bricken ◽  
Geoffrey Coco

The Virtual Environment Operating Shell (VEOS) was developed at the University of Washington's Human Interface Technology Laboratory as software infrastructure for the lab's research in virtual environments. VEOS was designed from scratch to provide a comprehensive and unified management facility to support the generation of, interaction with, and maintenance of virtual environments. VEOS emphasizes rapid prototyping, heterogeneous distributed computing, and portability. We discuss the design, philosophy, and implementation of VEOS in depth. Within the Kernel, the shared database transformations are pattern-directed, communications are asynchronous, and the programmer's interface is LISP. An entity-based metaphor extends object-oriented programming to systems-oriented programming. Entities provide first-class environments and biological programming constructs such as perceive, react, and persist. The organization, structure, and programming of entities are discussed in detail. The article concludes with a description of the applications that have contributed to the iterative refinement of the VEOS software.
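
The entity metaphor can be mimicked in a few lines. The following Python analogue is a minimal sketch only: VEOS's own programmer's interface was LISP and its shared database was pattern-directed, so the class layout, the dict-based "world", and the serial loop below are all hypothetical stand-ins that merely follow the abstract's perceive/react/persist vocabulary.

```python
# Minimal sketch of the "entity" metaphor: entities perceive a shared
# database, react to what they perceive, and persist across steps.

class Entity:
    """A first-class environment that can perceive, react, and persist."""

    def __init__(self, name, environment):
        self.name = name
        self.environment = environment   # shared database (here: a dict)
        self.alive = True

    def perceive(self):
        # Sample the shared environment (pattern-directed in real VEOS).
        return {k: v for k, v in self.environment.items() if k != self.name}

    def react(self, percepts):
        # Update our own state in response to what we perceived.
        self.environment[self.name] = len(percepts)

    def persist(self):
        # One step of the entity's ongoing behavior loop.
        if self.alive:
            self.react(self.perceive())

world = {}
entities = [Entity(f"e{i}", world) for i in range(3)]
for _ in range(2):            # two rounds of the loop, run serially here
    for e in entities:
        e.persist()
print(world)
```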


F1000Research ◽  
2016 ◽  
Vol 5 ◽  
pp. 2926 ◽  
Author(s):  
Lisa M. Breckels ◽  
Claire M. Mulvey ◽  
Kathryn S. Lilley ◽  
Laurent Gatto

Spatial proteomics is the systematic study of protein sub-cellular localisation. In this workflow, we describe the analysis of a typical quantitative mass spectrometry-based spatial proteomics experiment using the MSnbase and pRoloc Bioconductor package suite. To walk the user through the computational pipeline, we use a recently published experiment predicting protein sub-cellular localisation in pluripotent embryonic mouse stem cells. We describe the software infrastructure at hand, importing and processing data, quality control, sub-cellular marker definition, and visualisation and interactive exploration. We then demonstrate the application and interpretation of statistical learning methods, including novelty detection using semi-supervised learning, classification, clustering, and transfer learning, and conclude the pipeline with data export. The workflow is aimed at beginners who are familiar with proteomics in general and spatial proteomics in particular.
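
To make the classification step concrete, here is a rough Python analogue. The actual workflow uses the R/Bioconductor packages MSnbase and pRoloc; the data shapes, organelle labels, and use of scikit-learn below are hypothetical stand-ins that only mirror the idea of transferring labels from marker proteins to unlabelled ones.

```python
# Illustrative analogue of marker-based classification in spatial proteomics:
# proteins are rows of quantitative fractionation profiles, marker proteins
# carry organelle labels, and a classifier transfers labels to the rest.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
profiles = rng.random((100, 10))       # 100 proteins x 10 fraction channels
labels = np.array(["unknown"] * 100, dtype=object)
labels[:10] = "mitochondrion"          # hypothetical marker annotations
labels[10:20] = "cytosol"

markers = labels != "unknown"
clf = KNeighborsClassifier(n_neighbors=5).fit(profiles[markers], labels[markers])
predicted = clf.predict(profiles[~markers])   # assign organelles to the rest
print(predicted[:5])
```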


2021 ◽  
Vol 251 ◽  
pp. 04009
Author(s):  
Roel Aaij ◽  
Daniel Hugo Cámpora Pérez ◽  
Tommaso Colombo ◽  
Conor Fitzpatrick ◽  
Vladimir Vava Gligorov ◽  
...  

The upgraded LHCb detector, due to start data taking in 2022, will have to process an average data rate of 4 TB/s in real time. Because LHCb’s physics objectives require that the full detector information for every LHC bunch crossing is read out and made available for real-time processing, this bandwidth challenge is equivalent to that of the ATLAS and CMS HL-LHC software read-out, but deliverable five years earlier. Over the past six years, the LHCb collaboration has undertaken a bottom-up rewrite of its software infrastructure, pattern recognition, and selection algorithms to make them better able to exploit modern highly parallel computing architectures efficiently. We review the impact of this reoptimization on the energy efficiency of the real-time processing software and hardware which will be used for the upgrade of the LHCb detector. We also review the impact of the decision to adopt a hybrid computing architecture consisting of GPUs and CPUs for the real-time part of LHCb’s future data processing. We discuss the implications of these results for how LHCb’s real-time power requirements may evolve in the future, particularly in the context of a planned second upgrade of the detector.
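
The rate-reduction logic behind such a hybrid design can be illustrated with a toy two-stage filter. The stage names below follow LHCb's HLT1/HLT2 convention, but all thresholds, event fields, and rates are invented for illustration and bear no relation to the real trigger.

```python
# Toy two-stage real-time pipeline: a fast first stage (run on GPUs in a
# hybrid design) cheaply discards most bunch crossings, so the slower,
# more complete second stage (run on CPUs) sees only a small fraction.
import random

def hlt1(event):                 # cheap, inclusive selection
    return event["pt"] > 1.0

def hlt2(event):                 # expensive, exclusive selection
    return event["pt"] > 2.0 and event["quality"] > 0.8

events = [{"pt": random.expovariate(1.0), "quality": random.random()}
          for _ in range(100_000)]

after_hlt1 = [e for e in events if hlt1(e)]     # big rate reduction first
selected = [e for e in after_hlt1 if hlt2(e)]   # detailed selection last
print(len(after_hlt1) / len(events), len(selected) / len(events))
```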


Author(s):  
Roman Malo

XML (Extensible Markup Language) is a flexible platform for processing enterprise documents. Its simple syntax and the powerful software infrastructure available for processing such documents guarantee a high degree of interoperability, and XML today influences virtually every area of ICT. This paper describes the questions and basic principles of reusing XML-based documents in the field of enterprise documents. If XML databases or XML data types are used to store these documents, partial redundancy can be expected because of similarity between documents. This similarity appears both in document structure and in document content, and eliminating it is a necessary part of data optimization. The main idea of the paper is to explore how complex XML documents can be divided into independent fragments that can be used as standalone documents, and how such fragments can be processed. The conclusions can be applied in software tools that work with XML-based structured data and documents, such as document management systems or content management systems.
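
The fragmentation idea can be sketched in a few lines of Python. The element names and the hash-based duplicate detection below are illustrative choices, not the paper's specific method: a compound document is split into child fragments, and structurally identical fragments are stored only once.

```python
# Minimal sketch: split a compound XML document into standalone child
# fragments and detect duplicates by hashing their serialized form.
import hashlib
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<report>"
    "<section><title>Intro</title></section>"
    "<section><title>Intro</title></section>"    # identical fragment
    "<section><title>Results</title></section>"
    "</report>"
)

fragments = {}
for child in doc:
    blob = ET.tostring(child)           # standalone fragment document
    key = hashlib.sha256(blob).hexdigest()
    fragments.setdefault(key, blob)     # store each distinct fragment once

print(len(fragments), "distinct fragments out of", len(doc))   # 2 out of 3
```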

