SIFTing through satellite imagery with the Satellite Information Familiarization Tool

2020 ◽  
pp. 121-132
Author(s):  
Jordan J. Gerth ◽  
Raymond K. Garcia ◽  
David J. Hoese ◽  
Scott S. Lindstrom ◽  
Timothy J. Schmit

The Satellite Information Familiarization Tool (SIFT) is an open-source, multi-platform graphical user interface designed to easily display spectral and temporal sequences of geostationary satellite imagery. The Advanced Baseline Imager (ABI) and Advanced Himawari Imager (AHI) on the "new generation" of geostationary satellites collect imagery with a spatial resolution four times greater than previously available. Combined with the increased number of spectral bands and more frequent imaging, the new-series imagers collect approximately 60 times more data. Given the resulting large file sizes, the development of SIFT is a multiyear effort to make those satellite imagery data files accessible to the broad community of students, scientists, and operational meteorologists. To provide an intuitive user experience with good performance on consumer-grade computers, SIFT was built to leverage modern graphics processing units (GPUs) through existing open-source Python packages, and it runs on the three major operating systems: Windows, macOS, and Linux. The United States National Weather Service funded the development of SIFT to help enhance the satellite meteorology acumen of its operational meteorologists. SIFT offers basic image visualization capabilities and enables fluid animation and interrogation of satellite images, creation of Red-Green-Blue (RGB) composites and algebraic combinations of multiple spectral bands, and comparison of imagery with numerical weather prediction output. Open to community development, SIFT continues to grow in both users and features. SIFT is freely available online, with short tutorials and a user guide. The mandate for the software, its development, realized applications, and envisioned role in science and training are explained.
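The band arithmetic the abstract mentions can be sketched in a few lines of NumPy. The arrays, band numbers, wavelengths, and scaling ranges below are illustrative stand-ins, not SIFT's actual API or any operational RGB recipe:

```python
import numpy as np

def normalize(values, vmin, vmax):
    """Linearly scale a band (or band difference) into [0, 1] for display."""
    return np.clip((values - vmin) / (vmax - vmin), 0.0, 1.0)

# Tiny stand-in brightness-temperature arrays (K); real ABI/AHI bands are
# full-disk or sector images, and these band choices are only illustrative.
band11 = np.array([[285.0, 270.0], [260.0, 240.0]])  # ~8.4 um
band13 = np.array([[283.0, 268.0], [255.0, 235.0]])  # ~10.3 um
band15 = np.array([[280.0, 266.0], [250.0, 232.0]])  # ~12.3 um

# An algebraic band combination: the split-window difference.
split_window = band13 - band15

# A simple RGB composite: three normalized channels stacked depth-wise.
rgb = np.dstack([
    normalize(band15 - band13, -6.0, 3.0),
    normalize(band13 - band11, -4.0, 5.0),
    normalize(band13, 243.0, 293.0),
])
```

Each channel is just another NumPy array, which is why this style of composite maps cleanly onto GPU-backed display pipelines.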

2021 ◽  
Author(s):  
John Taylor ◽  
Pablo Larraondo ◽  
Bronis de Supinski

Abstract
Society has benefited enormously from the continuous advancement in numerical weather prediction that has occurred over many decades, driven by a combination of outstanding scientific, computational, and technological breakthroughs. Here we demonstrate that data-driven methods are now positioned to contribute to the next wave of major advances in atmospheric science. We show that data-driven models can predict important meteorological quantities of interest to society, such as global high-resolution precipitation fields (0.25°), and can deliver accurate forecasts of the future state of the atmosphere without prior knowledge of the laws of physics and chemistry. We also show how these data-driven methods can be scaled to run on supercomputers with up to 1024 modern graphics processing units (GPUs) and beyond, resulting in rapid training of data-driven models and thus supporting a cycle of rapid research and innovation. Taken together, these two results illustrate the significant potential of data-driven methods to advance atmospheric science and operational weather forecasting.


2014 ◽  
Vol 24 (01) ◽  
pp. 1450003 ◽  
Author(s):  
Xavier Lapillonne ◽  
Oliver Fuhrer

For many scientific applications, graphics processing units (GPUs) can be an interesting alternative to conventional CPUs, as they can deliver higher memory bandwidth and computing power. While it is conceivable to rewrite the most execution-time-intensive parts using a low-level API for accelerator programming, it may not be feasible to do so for the entire application. But having only selected parts of the application running on the GPU requires repeatedly transferring data between the GPU and the host CPU, which may incur a serious performance penalty. In this paper we assess the potential of compiler directives, based on the OpenACC standard, for porting large parts of code and thus achieving a full GPU implementation. As an illustrative and relevant example, we consider the climate and numerical weather prediction code COSMO (Consortium for Small-Scale Modeling) and focus on the physical parametrizations, the part of the code that describes all physical processes not accounted for by the fundamental equations of atmospheric motion. We show, by porting three of the dominant parametrization schemes (the radiation, microphysics, and turbulence parametrizations), that compiler directives are an efficient tool in terms of both final execution time and implementation effort. Compiler directives make it possible to port large sections of the existing code with minor modifications while still allowing further optimization of the most performance-critical parts. Using the example of the radiation parametrization, which contains the solution of a block tri-diagonal linear system, the required code modifications and key optimizations are discussed in detail. Performance tests for the three physical parametrizations show execution-time speedups of between 3× and 7× for the GPU over a multi-core CPU of an equivalent generation.
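The tri-diagonal solve in the radiation scheme is a good anchor for intuition: forward elimination and back substitution are strictly sequential recurrences within a column, so the GPU parallelism comes from running many independent columns at once. A scalar Thomas-algorithm sketch in NumPy (not taken from COSMO, where the system is block tri-diagonal) shows that dependency structure:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system in O(n): `a` sub-, `b` main-, `c`
    super-diagonal, `d` right-hand side (a[0] and c[-1] are unused)."""
    n = len(b)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):  # forward elimination: a sequential recurrence
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution, also sequential
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Diagonally dominant 4x4 example with known solution [1, 1, 1, 1].
a = np.array([0.0, 1.0, 1.0, 1.0])
b = np.array([4.0, 4.0, 4.0, 4.0])
c = np.array([1.0, 1.0, 1.0, 0.0])
d = np.array([5.0, 6.0, 6.0, 5.0])
x = thomas_solve(a, b, c, d)
```

In an OpenACC port, loops like these stay serial per column while a directive parallelizes over the horizontal grid points.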


Author(s):  
Martin C. W. Leong ◽  
Kit-Hang Lee ◽  
Bowen P. Y. Kwan ◽  
Yui-Lun Ng ◽  
Zhiyu Liu ◽  
...  

Abstract
Purpose: Intensity-based image registration has proven essential in many applications, owing to its unparalleled ability to resolve image misalignments. However, the long registration time required for image realignment prohibits its use in intra-operative navigation systems. There has been much work on accelerating the registration process by improving the algorithm's robustness, but the inherent computational cost of the registration algorithm has remained unaddressed.
Methods: Intensity-based registration methods involve operations with high arithmetic load and memory-access demand, which graphics processing units (GPUs) are well suited to reduce. Although GPUs are widespread and affordable, there is a lack of open-source GPU implementations optimized for non-rigid image registration. This paper demonstrates performance-aware programming techniques, involving the systematic exploitation of GPU features, by implementing the diffeomorphic log-demons algorithm.
Results: By resolving the pinpointed computation bottlenecks on the GPU, our implementation of diffeomorphic log-demons on an Nvidia GTX Titan X GPU achieved an approximately 95-fold speed-up compared to the CPU and registered a 1.3-M voxel image in 286 ms. Even for large 37-M voxel images, our implementation registers in 8.56 s, an approximately 258-fold speed-up. Our solution involves effective employment of GPU computation units, memory, and data bandwidth to resolve computation bottlenecks.
Conclusion: The computation bottlenecks in diffeomorphic log-demons are pinpointed, analyzed, and resolved using various GPU performance-aware programming techniques. The proposed fast computation of basic image operations not only enhances the computation of diffeomorphic log-demons but can potentially be extended to speed up many other intensity-based approaches. Our implementation is open-source on GitHub at https://bit.ly/2PYZxQz.
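To see the kind of per-voxel arithmetic and smoothing traffic involved, here is a heavily simplified, CPU-only sketch of a single classic demons update in 2-D NumPy. This is not the authors' diffeomorphic log-demons implementation; the 3x3 box filter merely stands in for the Gaussian regularization:

```python
import numpy as np

def smooth3(u):
    """3x3 box smoothing with edge padding (a stand-in for Gaussian
    regularization of the displacement field)."""
    p = np.pad(u, 1, mode="edge")
    ny, nx = u.shape
    return sum(p[i:i + ny, j:j + nx] for i in range(3) for j in range(3)) / 9.0

def demons_step(fixed, moving, eps=1e-9):
    """One simplified 2-D demons update: a displacement force from the
    intensity difference and the moving image's gradient, then smoothing."""
    diff = fixed - moving
    gy, gx = np.gradient(moving)
    denom = gx ** 2 + gy ** 2 + diff ** 2 + eps
    ux = diff * gx / denom  # x-component of the update field
    uy = diff * gy / denom  # y-component
    return smooth3(ux), smooth3(uy)
```

Every step is an elementwise map or a small-stencil pass over the whole volume, which is why these methods are memory-bandwidth bound and benefit so strongly from GPU execution.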




2019 ◽  
Vol 35 (17) ◽  
pp. 3181-3183 ◽  
Author(s):  
Patryk Orzechowski ◽  
Jason H Moore

Abstract
Motivation: In this paper, we present an open-source package with the latest release of Evolutionary-based BIClustering (EBIC), a next-generation biclustering algorithm for mining genetic data. The major contribution of this paper is adding full support for multiple graphics processing units (GPUs), which makes it possible to run large genomic data mining analyses efficiently. Multiple enhancements to the first release of the algorithm include integration with R and Bioconductor and an option to exclude missing values from the analysis.
Results: EBIC was applied to datasets of different sizes, including a large DNA methylation dataset with 436 444 rows. For the largest dataset we observed over 6.6-fold speedup in computation time on a cluster of eight GPUs compared to running the method on a single GPU, demonstrating the high scalability of the method.
Availability and implementation: The latest version of EBIC can be downloaded from http://github.com/EpistasisLab/ebic. Installation and usage instructions are also available online.
Supplementary information: Supplementary data are available at Bioinformatics online.


Atmosphere ◽  
2019 ◽  
Vol 10 (4) ◽  
pp. 177 ◽  
Author(s):  
Keith Hutchison ◽  
Barbara Iisager

Clouds are critical in mechanisms that impact climate sensitivity studies, air quality and solar energy forecasts, and a host of aerodrome flight and safety operations. However, cloud forecast accuracies are seldom described in performance statistics provided with most numerical weather prediction (NWP) and climate models. A possible explanation for this apparent omission involves the difficulty in developing cloud ground truth databases for the verification of large-scale numerical simulations. Therefore, the process of developing highly accurate cloud cover fraction truth data from manually generated cloud/no-cloud analyses of multispectral satellite imagery is the focus of this article. The procedures exploit the phenomenology to maximize cloud signatures in a variety of remotely sensed satellite spectral bands in order to create accurate binary cloud/no-cloud analyses. These manual analyses become cloud cover fraction truth after being mapped to the grids of the target datasets. The process is demonstrated by examining all clouds in a NAM dataset along with a 24 h WRF cloud forecast field generated from them. Quantitative comparisons with the cloud truth data for the case study show that clouds in the NAM data are under-specified while the WRF model greatly over-predicts them. It is concluded that highly accurate cloud cover truth data are valuable for assessing cloud model input and output datasets and their creation requires the collection of satellite imagery in a minimum set of spectral bands. It is advocated that these remote sensing requirements be considered for inclusion into the designs of future environmental satellite systems.
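The step the article describes, mapping a binary cloud/no-cloud analysis onto the grid of a target dataset to obtain cloud cover fraction, amounts to block-averaging the mask. A minimal NumPy sketch (the grid sizes and the exact regridding used in the article will differ):

```python
import numpy as np

def cloud_fraction(mask, factor):
    """Block-average a binary cloud/no-cloud mask (1 = cloud) onto a grid
    coarsened by `factor`, giving a cloud cover fraction per grid cell."""
    ny, nx = mask.shape
    assert ny % factor == 0 and nx % factor == 0, "mask must tile the grid evenly"
    blocks = mask.reshape(ny // factor, factor, nx // factor, factor)
    return blocks.mean(axis=(1, 3))

# A 4x4 manual cloud/no-cloud analysis mapped to a 2x2 target grid.
mask = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 1, 1],
                 [0, 0, 1, 1]], dtype=float)
frac = cloud_fraction(mask, 2)
```

Each coarse cell's value is the fraction of its high-resolution pixels flagged as cloud, which is directly comparable to a model's gridded cloud fraction field.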


2008 ◽  
Vol 18 (04) ◽  
pp. 531-548 ◽  
Author(s):  
JOHN MICHALAKES ◽  
MANISH VACHHARAJANI

Weather and climate prediction software has enjoyed the benefits of exponentially increasing processor power for almost 50 years. Even with the advent of large-scale parallelism in weather models, much of the performance increase has come from increasing processor speed rather than increased parallelism. This free ride is nearly over. Recent results also indicate that simply increasing the use of large-scale parallelism will prove ineffective for many scenarios where strong scaling is required. We present an alternative method of scaling model performance by exploiting emerging architectures using the fine-grain parallelism once used in vector machines. The paper shows the promise of this approach by demonstrating a nearly 10× speedup for a computationally intensive portion of the Weather Research and Forecast (WRF) model on a variety of NVIDIA graphics processing units (GPUs). This change alone speeds up the whole weather model by 1.23×.
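The relationship between the 10× kernel speedup and the 1.23× whole-model speedup is Amdahl's law. A quick check suggests the accelerated portion accounts for roughly a fifth of WRF's runtime; the 0.21 fraction below is inferred from the two reported numbers, not stated in the paper:

```python
def overall_speedup(fraction, kernel_speedup):
    """Amdahl's law: whole-program speedup when `fraction` of the runtime
    is accelerated by `kernel_speedup` and the rest is unchanged."""
    return 1.0 / ((1.0 - fraction) + fraction / kernel_speedup)

# With ~21% of the runtime in the accelerated kernel (an inferred figure),
# a 10x kernel speedup gives roughly the reported 1.23x overall.
s = overall_speedup(0.21, 10.0)
```

The same formula explains why whole-model GPU ports, as in the COSMO work above, pay off far more than accelerating a single kernel.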


Author(s):  
Josie E. Rodriguez Condia ◽  
Pierpaolo Narducci ◽  
Matteo Sonza Reorda ◽  
Luca Sterpone

Abstract
General-purpose graphics processing units (GPGPUs) are extensively used in high-performance computing. However, it is well known that the reliability of these devices may be limited by faults arising at the hardware level. This work introduces a flexible solution to detect and mitigate permanent faults affecting the execution units in these parallel devices. The proposed solution is based on adding spare modules to perform two in-field operations: detecting and mitigating faults. It takes advantage of the regularity of the execution units in the device to avoid significant design changes and to reduce overhead. The proposed solution was evaluated in terms of reliability improvement and area, performance, and power overhead costs. For this purpose, we used a micro-architectural open-source GPGPU model (FlexGripPlus). Experimental results show that the proposed solution can extend reliability by up to 57%, with overhead costs lower than 2% in area and 8% in power.

