The Institutionalisation of Digital Public Health: Lessons Learned from the COVID-19 App

2020 ◽  
Vol 11 (2) ◽  
pp. 228-235 ◽  
Author(s):  
Ciro CATTUTO ◽  
Alessandro SPINA

Amid the SARS-CoV-2 pandemic, there has been a call to use innovative digital tools for the purpose of protecting public health. There are a number of proposals to embed digital solutions into the regulatory strategies adopted by public authorities to control the spread of the coronavirus more effectively. They range from algorithms that detect population movements using telecommunications data to the use of artificial intelligence and high-performance computing power to detect patterns in the spread of the virus. However, the use of a mobile phone application for contact tracing is certainly the most popular.

Author(s):  
Bonjun Koo ◽  
Manoj Jegannathan ◽  
Johyun Kyoung ◽  
Ho-Joon Lim

Abstract: In this study, direct time domain offloading simulations are conducted without condensing the metocean data, using High Performance Computing (HPC). With rapidly growing computing power from increased CPU speeds and parallel processing capability, direct time domain simulation for offloading analyses has become a practical option. For instance, 3-hour time domain simulations covering the entire service life of a floating platform (e.g. 100,000 simulations for 35 years) can now be conducted within a day. The simulation results provide realistic offloading operational time windows that consider both the offloading operation sequence (i.e. berthing, connection, offloading duration and disconnection) and the required criteria (i.e. relative responses, loads on hawser and flow line, etc.). The direct time domain offloading analyses improve the prediction of offloading operability, the sizing of the FPSO tank capacity, and the selection of the shuttle tanker. In addition, this method enables accurate evaluation of the economic feasibility of field development using FPSOs.
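The scale quoted in the abstract can be checked with back-of-the-envelope arithmetic: 35 years sliced into 3-hour windows gives roughly the 100,000 simulations mentioned. The per-simulation wall time below is a hypothetical figure (not from the paper), used only to illustrate how the required parallelism for a one-day turnaround would be sized.

```python
# Sizing sketch for direct time-domain offloading runs.
# Assumptions (illustrative, not from the paper): 365.25 days/year,
# and a hypothetical wall time of 30 s per 3-hour simulation on one core.
HOURS_PER_WINDOW = 3
YEARS = 35

# Number of 3-hour windows over the full service life.
windows = int(YEARS * 365.25 * 24 / HOURS_PER_WINDOW)  # close to the quoted 100,000

per_sim_seconds = 30  # hypothetical per-simulation cost
total_cpu_seconds = windows * per_sim_seconds

# Cores needed to finish the whole campaign within one day (ceiling division).
cores_for_one_day = -(-total_cpu_seconds // 86_400)

print(windows, cores_for_one_day)
```

Under these assumptions, a few dozen cores already bring the full-service-life campaign within a one-day budget, which is consistent with the abstract's claim that such studies are now practical on HPC clusters.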


2020 ◽  
Author(s):  
Hamza Ali Imran

Applications such as Big Data, Machine Learning, Deep Learning, and other engineering and scientific research require a lot of computing power, making High-Performance Computing (HPC) an important field. However, access to supercomputers is out of reach for the majority. Nowadays, supercomputers are actually clusters of computers, usually built from commodity hardware. Such clusters are called Beowulf clusters; their history goes back to 1994, when NASA built a supercomputer by creating a cluster of commodity hardware. In recent times, much effort has gone into building HPC clusters even from single-board computers (SBCs). Although creating clusters of commodity hardware is possible, it is a cumbersome task. Moreover, the maintenance of such systems is difficult and requires special expertise and time. The concept of the cloud is to provide on-demand resources, whether services, platforms or even infrastructure, by sharing a large resource pool. Cloud computing has resolved problems such as hardware maintenance and the need for networking expertise. This work brings concepts from cloud computing to HPC in order to obtain the benefits of the cloud. The main target is to create a system capable of providing computing power as a service, referred to here as Supercomputer as a Service. A prototype was made using Raspberry Pi (RPi) 3B and 3B+ single-board computers. The reason for using RPi boards was the increasing popularity of ARM processors in the field of HPC.
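The core pattern of a Beowulf cluster is splitting an embarrassingly parallel job into independent work units and farming them out to nodes. The sketch below (illustrative, not the authors' prototype) mimics that pattern on a single machine with Python's `multiprocessing`; on a real RPi cluster this role would typically be played by MPI (e.g. OpenMPI with mpi4py) over the network.

```python
# Illustrative master/worker sketch: split a sum over a range into chunks
# and process the chunks in parallel, as a cluster scheduler would
# distribute them across nodes.
from multiprocessing import Pool

def partial_sum(chunk):
    """Worker task: sum one contiguous sub-range."""
    lo, hi = chunk
    return sum(range(lo, hi))

def cluster_sum(n, workers=4):
    """Split range(n) into `workers` chunks and combine the partial results."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(cluster_sum(1_000_000))
```

The same scatter/compute/gather shape carries over whether the "workers" are processes on one board or RPi nodes on a switch; only the transport (shared memory vs. message passing) changes.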


Author(s):  
Jeremy Cohen ◽  
John Darlington

As computing power continues to grow and high performance computing use increases, ever bigger scientific experiments and tasks can be carried out. However, managing the computing power necessary to support these ever-growing tasks is becoming more and more difficult. Increased power consumption, heat generation and space costs for the larger numbers of resources required can make local hosting of resources too expensive. The emergence of utility computing platforms offers a solution. We present our recent work to develop an update to our computational markets environment to support application deployment and brokering across multiple utility computing environments. We develop a prototype to demonstrate the potential benefits of such an environment and look at the longer-term changes in the use of computing that might be enabled by such developments.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Sergio Botelho Junior ◽  
Bill O’Gorman

Purpose This paper aims to explore high performance computing (HPC) in the context of the South East region of Ireland, which hosts a publicly available HPC infrastructure, by identifying whether companies, especially small and medium enterprises (SMEs), are using, or are prepared to use, HPC to improve their business processes, expansion and sustainability. The result of the analysis provides region-specific guidelines that are meant to improve the HPC landscape in the region. The lessons learned from this research may apply to other similar, and developing, European regions. Design/methodology/approach This paper explores the use of HPC in the context of the South East region of Ireland and examines whether companies, especially SMEs, are benefiting from the use of publicly available HPC infrastructure in the region. This paper also provides a set of recommendations, of a policy nature, and required actions to increase HPC usage, based on the reality of the region. Therefore, the first step in the process was to understand the HPC landscape in the South East region of Ireland. Interviews were conducted with higher education institute (HEI) staff who were knowledgeable about the HPC infrastructure of their institutes and also about whether collaboration between the HEIs and businesses from the same region exists. The interview findings allowed the proposal of region-specific guidelines to improve the HPC landscape and collaboration in the region. The guidelines were analysed and refined in a focus group with key regional stakeholders from academia, industry and government, who have experience and expertise in high-technology transfer processes happening in the region. 
Findings The findings of the current study strongly suggest that HPC usage by SMEs in the South East region of Ireland is still incipient, and that HPC knowledge is currently inadequately transferred from the HEI hosting the HPC infrastructure to public and private sector organisations based in the region. The findings also demonstrate that there are no courses or training programmes available dedicated to HPC, and that the level of collaboration between the HEI hosting the HPC infrastructure and industry in the region is minimal as regards HPC usage and projects. Therefore, there is a need to put specific, targeted policies and actions in place, from both regional government and HEI perspectives, to encourage SMEs to optimise their processes by using HPC. Originality/value This research is unique as it provides customised region-specific recommendations (RSRs) and feasible actions to encourage industry, especially SMEs, to use HPC and collaborate around it. The literature review identified that there is a lack of studies that can inform policymakers to include HPC in their innovation agenda. Previous research studies specifically focussing on HPC policies are even more scarce. Most of the existing research pertaining to HPC focusses on the technical aspects of HPC; therefore, this research and paper bring a new dimension to existing HPC research. Even though this research was focussed on the South East of Ireland region, the model that generated the RSRs can be extrapolated and applied to other regions that need to develop their HPC landscape and the use of HPC among SMEs in their respective regions.


Symmetry ◽  
2020 ◽  
Vol 12 (6) ◽  
pp. 1029
Author(s):  
Anabi Hilary Kelechi ◽  
Mohammed H. Alsharif ◽  
Okpe Jonah Bameyi ◽  
Paul Joan Ezra ◽  
Iorshase Kator Joseph ◽  
...  

Power-consuming entities such as high performance computing (HPC) sites and large data centers are growing with the advance of information technology. In business, HPC is used to shorten product delivery time, reduce production cost, and decrease the time it takes to develop a new product. Today's high level of computing power from supercomputers comes at the expense of consuming large amounts of electric power. To minimize the energy utilized by HPC entities, it is necessary to reduce the energy required by the computing systems themselves and by the resources needed to operate them. A database can support system energy efficiency by sampling the power consumption of all components at regular intervals and storing this information; the stored information then serves as input data for energy-efficiency optimization. Moreover, device workload information and other usage metrics are stored in the database. There has been strong momentum in the area of artificial intelligence (AI) as a tool for optimization and process automation by leveraging existing information. This paper discusses ideas for improving energy efficiency for HPC using AI.
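The sampling-into-a-database idea can be sketched concretely. The schema and component names below are illustrative assumptions (the paper does not specify them): per-component power readings are inserted at regular intervals, and aggregates over that table become the input features for an optimiser.

```python
# Illustrative power-telemetry store (hypothetical schema, not from the
# paper): sample each component's draw in watts at regular intervals,
# then aggregate the samples as input for energy-efficiency optimisation.
import sqlite3

def record_sample(db, component, watts, ts):
    """Insert one power sample for a component at timestamp ts."""
    db.execute(
        "INSERT INTO power(component, watts, ts) VALUES (?, ?, ?)",
        (component, watts, ts),
    )

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE power(component TEXT, watts REAL, ts INTEGER)")

# Three samples for one CPU at consecutive ticks (made-up readings).
for ts, w in enumerate([95.0, 120.0, 110.0]):
    record_sample(db, "cpu0", w, ts)

# Aggregate query: the kind of feature an AI optimiser would consume.
avg = db.execute(
    "SELECT AVG(watts) FROM power WHERE component = 'cpu0'"
).fetchone()[0]
print(avg)
```

From here, a model could correlate such aggregates with the workload metrics the abstract mentions (also stored alongside) to decide, for example, when to throttle or consolidate jobs.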

