Experiences with distributed computing for meteorological applications: grid computing and cloud computing

2015 ◽  
Vol 8 (7) ◽  
pp. 2067-2078 ◽  
Author(s):  
F. Oesterle ◽  
S. Ostermann ◽  
R. Prodan ◽  
G. J. Mayr

Abstract. Experiences with three practical meteorological applications with different characteristics are used to highlight the core computer science aspects and the applicability of distributed computing to meteorology. By presenting cloud and grid computing, this paper shows use-case scenarios that fit a wide range of meteorological applications, from operational work to research studies. The paper concludes that distributed computing complements and extends existing high performance computing concepts and allows for simple, powerful and cost-effective access to computing capacity.

2015 ◽  
Vol 8 (2) ◽  
pp. 1171-1199 ◽  
Author(s):  
F. Schüller ◽  
S. Ostermann ◽  
R. Prodan ◽  
G. J. Mayr

Abstract. Experiences with three practical meteorological applications with different characteristics are used to highlight the core computer science aspects and the applicability of distributed computing to meteorology. By presenting Cloud and Grid computing, this paper shows use-case scenarios fitting a wide range of meteorological applications, from operational work to research studies. The paper concludes that distributed computing complements and extends existing high performance computing concepts and allows for simple, powerful and cost-effective access to computing capacity.


Author(s):  
Dimosthenis Kyriazis ◽  
Andreas Menychtas ◽  
Konstantinos Tserpes ◽  
Theodoros Athanaileas ◽  
Theodora Varvarigou

A constantly increasing number of applications from various scientific fields are adopting Grid technologies in order to take advantage of their capabilities: the advent of Grid environments has made it feasible to solve computationally intensive problems in a reliable and cost-effective way. This book chapter focuses on presenting and describing how high performance computing in general, and Grids specifically, can be applied in biomedicine. The latter poses a number of requirements, both computational and data-sharing/networking ones. In this context, we describe in detail how Grid environments can fulfill these requirements. Furthermore, this book chapter includes a set of cases and scenarios of biomedical applications on Grids, in order to highlight the added value of distributed computing in this domain.


Author(s):  
Adam Brian Nulty

Introduction: The current generation of 3D printers is lighter, cheaper, and smaller, making them more accessible to the chairside digital dentist than ever before. 3D printers in both industrial and chairside settings can work with various types of materials, including metals, ceramics, and polymers. Evidence presented in many studies shows that an ideal material for dental restorations is characterised by several properties related to durability, cost-effectiveness, and high performance. This review is the second part of a 3D Printing series; it surveys the literature on material science and the applications of these materials in 3D printing, and discusses the potential further development and future evolution of 3D printing materials. Conclusions: Current 3D printing materials provide a wide range of possibilities for more predictable workflows, and additive manufacturing improves efficiency over wasteful subtractive CAD/CAM procedures. Incorporating a 3D printer and a digital workflow into a dental practice is challenging, but the wide range of manufacturing options and materials available means that the dentist should be well prepared to treat patients with a more predictable and cost-effective treatment pathway. As 3D printing becomes a commonplace addition to chairside dental clinics, the evolution of these materials, in particular reinforced PMMA, zirconia-incorporating resins and glass-reinforced polymers, offers increased speed and improved aesthetics that will likely replace subtractive milling machines for most procedures.


Author(s):  
Atta ur Rehman Khan ◽  
Abdul Nasir Khan

Mobile devices are gaining high popularity due to their support for a wide range of applications. However, mobile devices are resource-constrained, and many applications require substantial resources. To address this issue, researchers envision the use of mobile cloud computing technology, which offers high performance computing, execution of resource-intensive applications, and energy efficiency. This chapter highlights the importance of mobile devices, high performance applications, and the computing challenges of mobile devices. It also provides a brief introduction to mobile cloud computing technology: its architecture, types of mobile applications, the computation offloading process, effective offloading challenges, and the high performance computing applications on mobile devices that it enables.
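The computation offloading process the chapter introduces can be sketched as a simple cost comparison: offload only when transferring the input plus executing remotely is estimated to be faster than executing locally. This is a minimal illustrative model, not the chapter's; every name and parameter below is an assumption.

```python
def should_offload(local_cycles, cpu_hz, data_bytes, bandwidth_bps, cloud_speedup):
    """Toy offloading decision: True if cloud execution is estimated faster.

    Real offloading frameworks also weigh energy use, latency variance,
    and result-download time; this sketch compares wall-clock time only.
    """
    local_time = local_cycles / cpu_hz                      # run on the device
    transfer_time = data_bytes * 8 / bandwidth_bps          # upload input to the cloud
    remote_time = local_cycles / (cpu_hz * cloud_speedup)   # run on a faster cloud CPU
    return transfer_time + remote_time < local_time
```

Under this model, compute-heavy tasks with small inputs favour offloading, while cheap tasks with large inputs run locally.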


Author(s):  
Qiang Guan ◽  
Nathan DeBardeleben ◽  
Sean Blanchard ◽  
Song Fu ◽  
Claude H. Davis IV ◽  
...  

As the high performance computing (HPC) community continues to push towards exascale computing, today's HPC applications are affected by soft errors only to a small degree, but we expect this to become a more serious issue as HPC systems grow. We propose F-SEFI, a Fine-grained Soft Error Fault Injector, as a tool for profiling software robustness against soft errors. We use soft error injection to mimic the impact of errors on logic circuit behavior. Leveraging the open source virtual machine hypervisor QEMU, F-SEFI enables users to modify emulated machine instructions to introduce soft errors. F-SEFI can control which application and which sub-function to target, and when and how to inject soft errors, with different granularities and without interfering with other applications that share the same environment. We demonstrate use cases of F-SEFI on several benchmark applications with different characteristics to show how data corruption can propagate to incorrect results. The findings from the fault injection campaign can be used for designing robust software and power-efficient hardware.
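The core idea of bit-flip fault injection can be illustrated in a few lines. This is a hypothetical sketch on plain integers; F-SEFI itself operates at the level of QEMU's instruction emulation, not application source code.

```python
def flip_bit(value, bit, width=32):
    """Flip a single bit of a non-negative integer, mimicking the effect
    of a single-event upset in a register or memory cell."""
    return (value ^ (1 << bit)) & ((1 << width) - 1)

def injected_sum(values, index, bit):
    """Sum a list after flipping one bit of one element, showing how a
    single soft error silently propagates into an incorrect final result."""
    corrupted = list(values)
    corrupted[index] = flip_bit(corrupted[index], bit)
    return sum(corrupted)
```

Flipping the same bit twice restores the original value, which is why such injectors can also be used to study error masking: a corrupted value that is later overwritten never reaches the output.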


Author(s):  
Jagdish Chandra Patni

Powerful computational capabilities and resource availability at low cost are the foremost demands of high performance computing. The computing resources can be viewed as the edges of an interconnected grid, and the capabilities of grid computing can be attained by balancing the load at various levels. Since the resources are heterogeneous and geographically distributed, the grid computing paradigm in its original form cannot meet these requirements, so it must draw on the capabilities of the cloud and other technologies to achieve this goal. Resource heterogeneity makes grid computing more dynamic and challenging. Therefore, this article discusses the problems of scalability, heterogeneity and adaptability in grid computing from the perspective of providing high computing power, load balancing and resource availability.
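As an illustration of load balancing across heterogeneous grid nodes, a greedy least-loaded scheduler is a common baseline (this sketch is not taken from the article; all names are assumptions):

```python
import heapq

def balance(tasks, node_count):
    """Assign each task to the currently least-loaded node.

    tasks: list of (task_id, estimated_cost) pairs.
    Returns a dict mapping task_id -> node index. A min-heap keyed on
    accumulated load keeps the least-loaded node at the top.
    """
    heap = [(0.0, node) for node in range(node_count)]
    heapq.heapify(heap)
    assignment = {}
    for task_id, cost in tasks:
        load, node = heapq.heappop(heap)   # least-loaded node so far
        assignment[task_id] = node
        heapq.heappush(heap, (load + cost, node))
    return assignment
```

Heterogeneity could be folded in by dividing each cost by the node's speed before accumulating load, which is one way grid schedulers adapt the same greedy idea to unequal resources.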


Author(s):  
Stefan Westerlund ◽  
Christopher Harris

Abstract. The latest generation of radio astronomy interferometers will conduct all-sky surveys with data products consisting of petabytes of spectral line data. Traditional approaches to identifying and parameterising the astrophysical sources within this data will not scale to datasets of this magnitude, since the performance of workstations will not keep up with the real-time generation of data. For this reason, it is necessary to employ high performance computing systems consisting of a large number of processors connected by a high-bandwidth network. In order to make use of such supercomputers, substantial modifications must be made to serial source finding code. To ease the transition, this work presents the Scalable Source Finder Framework (SSoFF), a framework providing storage access, network communication and data composition functionality, which can support a wide range of source finding algorithms provided they can be applied to subsets of the entire image. Additionally, the Parallel Gaussian Source Finder (PGSF) was implemented using SSoFF, utilising Gaussian filters, thresholding, and local statistics. PGSF was able to search a 256 GB simulated dataset in under 24 minutes, significantly less than the 8- to 12-hour observation that would generate such a dataset.
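The per-subset thresholding with local statistics that PGSF relies on can be sketched serially. This toy version assumes a 2-D image given as nested lists and tiles of at least two pixels; the function names and the 3-sigma cut are illustrative assumptions, not details from the paper.

```python
from statistics import mean, stdev

def find_sources(image, tile, k=3.0):
    """Flag source-candidate pixels tile by tile.

    Each tile is processed independently: a pixel is a candidate when it
    exceeds the tile's local mean by k local standard deviations. Because
    tiles are independent, the loop parallelises naturally across nodes,
    which is the property SSoFF-style frameworks exploit.
    """
    rows, cols = len(image), len(image[0])
    hits = []
    for r0 in range(0, rows, tile):
        for c0 in range(0, cols, tile):
            vals = [image[r][c]
                    for r in range(r0, min(r0 + tile, rows))
                    for c in range(c0, min(c0 + tile, cols))]
            m, s = mean(vals), stdev(vals)   # local statistics of this tile
            for r in range(r0, min(r0 + tile, rows)):
                for c in range(c0, min(c0 + tile, cols)):
                    if image[r][c] > m + k * s:
                        hits.append((r, c))
    return hits
```

A real source finder would additionally smooth with Gaussian filters and merge neighbouring detections into parameterised sources; this sketch shows only the thresholding step.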

