shared infrastructure
Recently Published Documents

TOTAL DOCUMENTS: 61 (FIVE YEARS: 19)
H-INDEX: 7 (FIVE YEARS: 2)

2021, Vol 18 (4), pp. 1-23
Author(s): Tobias Gysi, Christoph Müller, Oleksandr Zinenko, Stephan Herhut, Eddie Davis, ...

Most compilers have a single core intermediate representation (IR), e.g., LLVM, sometimes complemented with vaguely defined IR-like data structures. This IR is commonly low-level and close to machine instructions. As a result, optimizations relying on domain-specific information are either not possible or require complex analysis to recover the missing information. In contrast, multi-level rewriting instantiates a hierarchy of dialects (IRs), lowers programs level-by-level, and performs code transformations at the most suitable level. We demonstrate the effectiveness of this approach for the weather and climate domain. In particular, we develop a prototype compiler and design stencil- and GPU-specific dialects based on a set of newly introduced design principles. We find that two domain-specific optimizations (500 lines of code) realized on top of LLVM’s extensible MLIR compiler infrastructure suffice to outperform state-of-the-art solutions. In essence, multi-level rewriting promises to herald the age of specialized compilers composed from domain- and target-specific dialects implemented on top of a shared infrastructure.
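A minimal Python sketch of the idea (illustrative only: the dialect names, operations, and rewrites below are simplified stand-ins for what MLIR dialects express, not the prototype compiler's actual code). It shows a domain-specific fusion rewrite applied at the stencil level, where the iteration-domain information is still explicit, followed by a lowering to a GPU-level representation:

```python
# Conceptual sketch of multi-level rewriting (not MLIR's real API):
# a program is a list of ops, each tagged with the dialect it belongs to;
# rewrites run at the level where the needed domain information still exists.
from dataclasses import dataclass, field

@dataclass
class Op:
    dialect: str                       # e.g. "stencil" or "gpu"
    name: str                          # operation name within the dialect
    attrs: dict = field(default_factory=dict)

def fuse_stencils(ops):
    """Domain-specific rewrite: merge adjacent stencil ops sharing a domain."""
    fused, i = [], 0
    while i < len(ops):
        op = ops[i]
        if (op.dialect == "stencil" and i + 1 < len(ops)
                and ops[i + 1].dialect == "stencil"
                and op.attrs.get("domain") == ops[i + 1].attrs.get("domain")):
            fused.append(Op("stencil", "apply.fused",
                            {"domain": op.attrs["domain"],
                             "body": [op.name, ops[i + 1].name]}))
            i += 2
        else:
            fused.append(op)
            i += 1
    return fused

def lower_stencil_to_gpu(ops):
    """Lowering: replace each stencil op with a gpu launch over its domain."""
    return [Op("gpu", "launch", {"grid": op.attrs.get("domain"), "body": op.name})
            if op.dialect == "stencil" else op
            for op in ops]

program = [
    Op("stencil", "apply.laplacian", {"domain": (64, 64, 60)}),
    Op("stencil", "apply.diffusion", {"domain": (64, 64, 60)}),
]

# Transform at the most suitable level, then lower level-by-level.
program = fuse_stencils(program)           # high-level, domain-specific rewrite
program = lower_stencil_to_gpu(program)    # lower towards the target
for op in program:
    print(op.dialect, op.name, op.attrs)
```

Once the program has been lowered to the GPU level, the shared-domain information that made the fusion legal is no longer visible, which is exactly why the rewrite has to happen at the higher level.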


2021
Author(s): Vanessa Fairhurst, Ed Pentz

Inclusive and efficient open research depends on foundational open scholarly infrastructure. It has become increasingly clear that there is a class of shared infrastructure to enable open research that should be open, community-governed, sustainable, and trusted by the research community. However, to date there has been little clarity about how to assess, or even define, open scholarly infrastructure. As services that the scholarly community relies on, and that are essential to open research, have been closed down or sold, it is imperative to understand and assess what constitutes open scholarly infrastructure. The Principles of Open Scholarly Infrastructure (POSI) were conceived to ensure that the stakeholders of a community-led organization or initiative have a clear say in setting its agenda and priorities, and can carefully close it down and start an alternative if needed. Join us in conversation with Ed Pentz, Executive Director of Crossref, to find out why the adoption of POSI is so significant for Crossref, how the organization currently meets the principles and what we will strive to do better, what this will mean for the future of Crossref and the wider community, and how you can get involved and learn more.


2021, Vol 150 (4), pp. A197-A198
Author(s): Aijun Song, Xiaoyan Hong, Fumin Zhang, Zheng Peng, Zhaohui Wang

2021, Vol 13 (16), pp. 8996
Author(s): Konrad Nübel, Michael Max Bühler, Thorsten Jelinek

Twenty-first century infrastructure needs to respond to changing demographics, becoming climate neutral, resilient, and economically affordable, while remaining a driver for development and shared prosperity. However, the infrastructure sector remains one of the least innovative and digitalized, plagued by delays, cost overruns, and benefit shortfalls. The authors assessed trends and barriers in the planning and delivery of infrastructure based on secondary research, qualitative interviews with internationally leading experts, and expert workshops. The analysis concludes that the root cause of the industry’s problems is the prevailing fragmentation of the infrastructure value chain and the lack of a long-term vision for infrastructure. To help overcome these challenges, an integration of the value chain is needed. The authors propose that this could be achieved through the use-case-based, vision- and governance-driven creation of federated digital platforms applied to infrastructure projects, and outline such a concept. Digital platforms enable full-lifecycle participation and responsible governance guided by a shared infrastructure vision. This paper has contributed as a policy recommendation to the Group of Twenty (G20) in 2021.


Author(s): Konrad Nübel, Michael Bühler, Thorsten Jelinek

Twenty-first century infrastructure needs to respond to changing demographics, becoming climate neutral, resilient, and economically affordable, while remaining a driver for development and shared prosperity. However, the infrastructure sector remains one of the least innovative and digitalized, plagued by delays, cost overruns, and benefit shortfalls [1-4]. The root cause is the prevailing fragmentation of the infrastructure value chain [5]. To overcome these shortcomings, an integration of the value chain is needed. This could be achieved through the use-case-based creation of federated digital platforms applied to infrastructure projects. Such digital platforms enable full-lifecycle participation and responsible governance guided by a shared infrastructure vision.


2021
Author(s): Peter C. Kalverla, Stef Smeets, Niels Drost, Bouwe Andela, Fakhereh Alidoost, ...

<p>Scientific progress greatly benefits from the participation of a broad and diverse community. Increasing data volumes put pressure on this scientific ecosystem, limiting the participation in the scientific process to a select group of researchers with access to sufficient storage and compute resources. This is not new.</p><p>To level the playing field for all researchers, a shared infrastructure had to be developed. We know it today as the ESGF. The European contribution to ESGF has been coordinated mainly through the IS-ENES projects. The current infrastructure provides access to the data as well as compute resources. So far, so good.</p><p>The next bottleneck for a smooth scientific process is ease of use. A lot of progress has already been made on standardization of climate model output, so that it is easier to analyse and compare different models. Moreover, a broad range of tools is being developed to better facilitate the processing of large data volumes. The constraint then becomes the ability to navigate this new scientific landscape and to effectively wield the new tools we have at our disposal.</p><p>There is another factor that hampers scientific progress. The increasing complexity of climate analysis workflows makes it difficult to reproduce, reuse, and build upon previous results. Of course it does not help that the main scientific mode of exchange is through journal articles, which are not well suited for sharing workflows. Which brings us to sharing code.</p><p>Code, by its nature, documents a workflow and thereby helps reproducibility. Sharing code is only just starting to take off, as part of a broader development towards a more transparent and reproducible scientific process. Now, interestingly, it is not the scarcity of tools, but rather their abundance that can lead to diverging workflows and poor interoperability.</p><p>The Earth System Model eValuation Tool (ESMValTool) was originally developed as a command line tool for routine evaluation climate models. This tool encourages some degree of standardization by factoring out common operations, while allowing for custom analytics of the pre-processed data. All scripts are bundled with the tool. Over time this has grown into a library of so-called ‘recipes’.</p><p>Recently we have started developing a Python API for the ESMValTool. This allows for interactive exploration, modification, and execution of existing recipes, as well as the creation of new workflows. At the same time, partners in IS-ENES3 are making their infrastructure accessible through JupyterLab. Through the combination of these technologies, researchers have direct access to data and resources, and they can easily re-use existing analysis workflows, all through the comfort of the web browser. During the conference, we will give an overview of the current possibilities, and we would like to encourage the discussion on future developments that are needed for a fruitful scientific process.</p>


Author(s): Jeffrey Montes, Jessy Kate Schingler, Philip Metzger

2021
Author(s): Peter C. Kalverla, Stef Smeets, Niels Drost, Bouwe Andela, Fakhereh Alidoost, ...

<p>Ease of use can easily become a limiting factor to scientific quality and progress. In order to verify and build upon previous results, the ability to effortlessly access and process increasing data volumes is crucial.</p><p>To level the playing field for all researchers, a shared infrastructure had to be developed. In Europe, this effort is coordinated mainly through the IS-ENES projects. The current infrastructure provides access to the data as well as compute resources. This leaves the tools to easily work with the data as the main obstacle for a smooth scientific process. Interestingly, not the scarcity of tools, but rather their abundance can lead to diverging workflows that hamper reproducibility.</p><p>The Earth System Model eValuation Tool (ESMValTool) was originally developed as a command line tool for routine evaluation of important analytics workflows. This tool encourages some degree of standardization by factoring out common operations, while allowing for custom analytics of the pre-processed data. All scripts are bundled with the tool. Over time this has grown into a library of so-called ‘recipes’.</p><p>In the EUCP project, we are now developing a Python API for the ESMValTool. This allows for interactive exploration, modification, and execution of existing recipes, as well as creation of new analytics. Concomitantly, partners in IS-ENES3 are making their infrastructure accessible through JupyterLab. Through the combination of these technologies, researchers can easily access the data and compute, but also the workflows or methods used by their colleagues - all through the web browser. During the vEGU, we will show how this extended infrastructure can be used to easily reproduce, and build upon, previous results.</p>


2021, Vol 18 (4), pp. 9-22
Author(s): Katarzyna Chałubińska-Jentkiewicz

The issue of acquiring large amounts of data and creating large sets of digital data, and then processing and analyzing them (Big Data) for the needs of generating artificial intelligence (AI) solutions is one of the key challenges for the development of the economy and for national security. Data have become a resource that will determine the power and the geopolitical and geoeconomic position of countries and regions in the 21st century.

The layout of data storage and processing in distributed databases has changed in recent years. Since the appearance of hosting among ICT services, we can speak of a new type of service, ASP (Application Service Provider), i.e. the provision of an application over an ICT network. Cloud Computing is therefore one variant of ASP services. The ASP guarantees the customer access to a dedicated application running on a server. Cloud Computing, on the other hand, gives the opportunity to use the resources of a shared infrastructure for many users simultaneously (Murphy n.d.). The use of the Cloud Computing model is more effective in many respects. Cloud Computing offers the opportunity to use three basic services: data storage in the cloud (cloud storage), applications in the cloud (cloud applications), and computing in the cloud (compute cloud). Website hosting and electronic mail are still the most frequently chosen services in Cloud Computing. The article attempts to explain responsibility for content stored in Cloud Computing.
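The contrast drawn here between ASP (a dedicated application instance per customer) and Cloud Computing (many tenants drawing on one shared infrastructure pool) can be made concrete with a small, purely illustrative sketch; the classes and numbers are hypothetical stand-ins for real provisioning systems:

```python
# Illustrative contrast between the ASP model (one dedicated application
# instance per customer) and the Cloud Computing model (many tenants
# sharing one pool of infrastructure resources).

class DedicatedAspInstance:
    """ASP: each customer gets its own application on its own server."""
    def __init__(self, customer):
        self.customer = customer
        self.server_cores = 8            # provisioned whether used or not

class SharedCloudPool:
    """Cloud: one shared infrastructure serves many tenants at once."""
    def __init__(self, total_cores):
        self.total_cores = total_cores
        self.allocations = {}            # tenant -> currently allocated cores

    def allocate(self, tenant, cores):
        used = sum(self.allocations.values())
        if used + cores > self.total_cores:
            raise RuntimeError("shared pool exhausted")
        self.allocations[tenant] = self.allocations.get(tenant, 0) + cores

asp = [DedicatedAspInstance(c) for c in ("A", "B", "C")]   # 3 x 8 cores reserved
cloud = SharedCloudPool(total_cores=16)
for tenant, demand in [("A", 2), ("B", 6), ("C", 4)]:      # 12 of 16 cores used
    cloud.allocate(tenant, demand)
print(sum(i.server_cores for i in asp), sum(cloud.allocations.values()))
```

The shared pool serves the same three customers with roughly half the reserved capacity, which is the efficiency argument the abstract alludes to; the legal question of who is responsible for the content stored on that shared infrastructure is what the article goes on to examine.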

