TriCCo v1.0.0 – a cubulation-based method for computing connected components on triangular grids

2021 ◽  
Author(s):  
Aiko Voigt ◽  
Petra Schwer ◽  
Noam von Rotberg ◽  
Nicole Knopf

Abstract. We present a new method to identify connected components on a triangular grid. Triangular grids are, for example, used in atmosphere and climate models to discretize the horizontal dimension. Because they are unstructured, neighbor relations are not self-evident and identifying connected components is challenging. Our method addresses this challenge by involving the mathematical tool of cubulation. We show that cubulation allows one to map the 2-d cells of the triangular grid onto the vertices of the 3-d cells of a cubic grid. The latter is structured, so connected components can be readily identified on the cubic grid by previously developed software packages. An advantage is that the cubulation, i.e., the mapping between the triangular and cubic grids, needs to be computed only once, which should be beneficial for analysing many data fields for the same grid. We further implement our method in a Python package that we name TriCCo and that is made available via pypi and gitlab. We document the package, demonstrate its application using cloud data from the ICON atmosphere model, and characterize its computational performance. This shows that TriCCo is ready for triangular grids with 100,000 cells, but that its speed and memory requirements need to be improved to analyse larger grids.
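To illustrate the core idea (a minimal sketch of the general principle, not TriCCo's API): once the unstructured triangular cells have been mapped onto a structured cubic grid, connected components can be identified with standard tools such as scipy.ndimage.label. The field and grid below are made-up examples.

```python
# Minimal sketch (not TriCCo's API): after a cubulation maps triangular
# cells onto a structured 3-d array, standard labelling tools apply.
import numpy as np
from scipy import ndimage

# Hypothetical binary field on the cubic grid: 1 where a mapped triangular
# cell satisfies the connectivity criterion (e.g., cloudy), 0 elsewhere.
field = np.zeros((4, 4, 4), dtype=int)
field[0, 0, 0] = field[0, 0, 1] = 1   # one component
field[3, 3, 3] = 1                    # a second, disconnected component

# Label connected components; the structure argument defines the
# neighbourhood (here full 26-connectivity in 3-d).
labels, n_components = ndimage.label(field, structure=np.ones((3, 3, 3)))
print(n_components)  # -> 2
```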

Author(s):  
Suliman A. Gargoum ◽  
James C. Koch ◽  
Karim El-Basyouny

The number of light poles and their position (in terms of density and offset from the roadside) have significant impacts on the safe operation of highways. In current practice, inventory of such information is performed through periodic site visits, which are tedious and time-consuming. This makes inventory and health monitoring of poles at a network level extremely challenging. To relieve the burden associated with manual inventory of poles, this paper proposes a novel algorithm that can automatically obtain such information from remotely sensed data. The proposed algorithm works by first tiling point cloud data collected using light detection and ranging (LiDAR) technology into manageable data tiles of fixed dimensions. The data are voxelized, and attributes for each data voxel are calculated to classify them into ground and nonground points. Connected components labeling is then used to perform 3D clustering of the data voxels. Further clustering is performed using a density-based clustering algorithm to combine connected components belonging to the same object. The final step involves classifying different objects into poles and non-poles based on a set of decision rules related to the geometric properties of the clusters. The proposed algorithm was tested on a 4 km rural highway segment in Alberta, Canada, which had substantial variation in its vertical alignment. The algorithm was accurate in detecting nonground objects, including poles. Moreover, the results also highlight the importance of considering the length of the highway and its terrain when detecting nonground objects from LiDAR.
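The two clustering stages described above can be sketched with generic tools. The following outline is illustrative only, not the authors' implementation; it assumes a voxelized binary occupancy grid and made-up parameter values, and uses scipy for 3D connected-components labelling plus scikit-learn's DBSCAN for the density-based merge.

```python
# Illustrative sketch only, not the authors' implementation.
import numpy as np
from scipy import ndimage
from sklearn.cluster import DBSCAN

def cluster_nonground_voxels(occupancy, voxel_size=0.5, eps=1.5):
    """Label 3D connected components of non-ground voxels, then merge
    nearby components with a density-based clustering step."""
    # Step 1: 26-connected components on the binary occupancy grid.
    labels, n = ndimage.label(occupancy, structure=np.ones((3, 3, 3)))

    # Step 2: represent each component by its centroid (in metres) and
    # merge components of the same physical object with DBSCAN.
    # min_samples=1 makes every centroid a core point, so components
    # within eps of each other chain into one cluster.
    centroids = np.array(ndimage.center_of_mass(occupancy, labels, list(range(1, n + 1))))
    merged = DBSCAN(eps=eps, min_samples=1).fit_predict(centroids * voxel_size)
    return labels, merged
```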


2020 ◽  
Vol 21 (1) ◽  
Author(s):  
Miha Moškon

Abstract Background Even though several computational methods for rhythmicity detection and analysis of biological data have been proposed in recent years, classical trigonometric regression based on cosinor still has several advantages over these methods and is still widely used. Different software packages for cosinor-based rhythmometry exist, but they lack certain functionalities and require data in different, non-unified input formats. Results We present CosinorPy, a Python implementation of cosinor-based methods for rhythmicity detection and analysis. CosinorPy merges and extends the functionalities of existing cosinor packages. It supports the analysis of rhythmic data using single- or multi-component cosinor models, automatic selection of the best model, population-mean cosinor regression, and differential rhythmicity assessment. Moreover, it implements functions for the design of experiments, a synthetic data generator, and import and export of data in different formats. Conclusion CosinorPy is an easy-to-use Python package for straightforward detection and analysis of rhythmicity requiring minimal statistical knowledge, and it produces publication-ready figures. Its code, examples, and documentation are available to download from https://github.com/mmoskon/CosinorPy. CosinorPy can be installed manually or by using pip, the package manager for Python packages. The implementation reported in this paper corresponds to the software release v1.1.
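As a reminder of the underlying model (a generic sketch, not CosinorPy's API), a single-component cosinor fits y(t) = M + A·cos(2πt/τ + φ); rewriting the cosine as a linear combination of cosine and sine terms turns this into an ordinary least-squares problem. The period and data below are made-up examples.

```python
# Generic single-component cosinor fit (not CosinorPy's API).
import numpy as np

def fit_cosinor(t, y, period=24.0):
    """Least-squares fit of y(t) = M + A*cos(2*pi*t/period + phi)."""
    omega = 2 * np.pi / period
    # Linearize: A*cos(omega*t + phi) = b1*cos(omega*t) + b2*sin(omega*t)
    X = np.column_stack([np.ones_like(t), np.cos(omega * t), np.sin(omega * t)])
    mesor, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
    amplitude = np.hypot(b1, b2)
    acrophase = np.arctan2(-b2, b1)  # phase phi in radians
    return mesor, amplitude, acrophase

# Example: noisy rhythm with a 24 h period
t = np.linspace(0, 72, 200)
y = 10 + 3 * np.cos(2 * np.pi * t / 24 - 1.0) + np.random.normal(0, 0.5, t.size)
print(fit_cosinor(t, y))  # roughly (10, 3, -1.0)
```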


2017 ◽  
Vol 10 (1) ◽  
pp. 19-34 ◽  
Author(s):  
Venkatramani Balaji ◽  
Eric Maisonnave ◽  
Niki Zadeh ◽  
Bryan N. Lawrence ◽  
Joachim Biercamp ◽  
...  

Abstract. A climate model represents a multitude of processes on a variety of timescales and space scales: a canonical example of multi-physics multi-scale modeling. The underlying climate system is physically characterized by sensitive dependence on initial conditions, and natural stochastic variability, so very long integrations are needed to extract signals of climate change. Algorithms generally possess weak scaling and can be I/O and/or memory-bound. Such weak-scaling, I/O, and memory-bound multi-physics codes present particular challenges to computational performance. Traditional metrics of computational efficiency such as performance counters and scaling curves do not tell us enough about real sustained performance from climate models on different machines. They also do not provide a satisfactory basis for comparative information across models. We introduce a set of metrics that can be used for the study of computational performance of climate (and Earth system) models. These measures do not require specialized software or specific hardware counters, and should be accessible to anyone. They are independent of platform and underlying parallel programming models. We show how these metrics can be used to measure actually attained performance of Earth system models on different machines, and identify the most fruitful areas of research and development for performance engineering. We present results for these measures for a diverse suite of models from several modeling centers, and propose to use these measures as a basis for a CPMIP, a computational performance model intercomparison project (MIP).
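Two of the headline CPMIP-style metrics, simulated years per day (SYPD) and core hours per simulated year (CHSY), require nothing more than wall-clock timings and the core count. A minimal sketch with made-up numbers:

```python
# Minimal sketch of two CPMIP-style metrics (illustrative numbers only).
def sypd(simulated_years, wallclock_hours):
    """Simulated years per day of wall-clock time."""
    return simulated_years / (wallclock_hours / 24.0)

def chsy(ncores, simulated_years, wallclock_hours):
    """Core hours consumed per simulated year."""
    return ncores * wallclock_hours / simulated_years

# Example: 10 simulated years in 48 h of wall-clock time on 1152 cores
print(sypd(10, 48))        # 5.0 simulated years per day
print(chsy(1152, 10, 48))  # 5529.6 core hours per simulated year
```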


2016 ◽  
Author(s):  
V. Balaji ◽  
E. Maisonnave ◽  
N. Zadeh ◽  
B. N. Lawrence ◽  
J. Biercamp ◽  
...  

Abstract. A climate model represents a multitude of processes on a variety of time and space scales: a canonical example of multi-physics multi-scale modeling. The underlying climate system is physically characterized by sensitive dependence on initial conditions, and natural stochastic variability, so very long integrations are needed to extract signals of climate change. Algorithms generally possess weak scaling and can be I/O and/or memory-bound. Such weak-scaling, I/O- and memory-bound multi-physics codes present particular challenges to computational performance. Traditional metrics of computational efficiency such as performance counters and scaling curves do not tell us enough about real sustained performance from climate models on different machines. They also do not provide a satisfactory basis for comparative information across models. We introduce a set of metrics that can be used for the study of computational performance of climate (and Earth System) models. These measures do not require specialized software or specific hardware counters, and should be accessible to anyone. They are independent of platform and underlying parallel programming models. We show how these metrics can be used to measure actually attained performance of Earth system models on different machines, and identify the most fruitful areas of research and development for performance engineering. We present results for these measures for a diverse suite of models from several modelling centres, and propose to use these measures as a basis for a CPMIP, a computational performance MIP.


2013 ◽  
Vol 94 (7) ◽  
pp. 1031-1049 ◽  
Author(s):  
C. J. Stubenrauch ◽  
W. B. Rossow ◽  
S. Kinne ◽  
S. Ackerman ◽  
G. Cesana ◽  
...  

Clouds cover about 70% of Earth's surface and play a dominant role in the energy and water cycle of our planet. Only satellite observations provide a continuous survey of the state of the atmosphere over the entire globe and across the wide range of spatial and temporal scales that compose weather and climate variability. Satellite cloud data records now exceed 25 years; however, climate data records must be compiled from different satellite datasets and can exhibit systematic biases. Questions therefore arise as to the accuracy and limitations of the various sensors and retrieval methods. The Global Energy and Water Cycle Experiment (GEWEX) Cloud Assessment, initiated in 2005 by the GEWEX Radiation Panel (GEWEX Data and Assessment Panel since 2011), provides the first coordinated intercomparison of publicly available, standard global cloud products (gridded monthly statistics) retrieved from measurements of multispectral imagers (some with multiangle view and polarization capabilities), IR sounders, and lidar. Cloud properties under study include cloud amount, cloud height (in terms of pressure, temperature, or altitude), cloud thermodynamic phase, and cloud radiative and bulk microphysical properties (optical depth or emissivity, effective particle radius, and water path). Differences in average cloud properties, especially in the amount of high-level clouds, are mostly explained by the inherent instrument measurement capability for detecting and/or identifying optically thin cirrus, especially when overlying low-level clouds. The study of long-term variations with these datasets requires consideration of many factors. The monthly gridded database presented here facilitates further assessments, climate studies, and the evaluation of climate models.


2013 ◽  
Vol 331 ◽  
pp. 631-635
Author(s):  
Ci Zhang ◽  
Guo Fan Hu ◽  
Xu Bing Chen

In reverse engineering, data pre-processing plays an increasingly important role in rebuilding the original 3D model. However, it is usually complex, time-consuming, and difficult to realize, as the acquired point cloud contains huge amounts of redundant 3D data. To find a solution for this issue, point cloud data processing and streamlining technologies are reviewed first. Second, a novel pre-processing approach is proposed in three steps: point cloud registration, regional 3D triangular mesh construction, and point cloud filtering. The projected hexagonal area and the closest projected point are then defined. Finally, a parabolic antenna model is employed as a case study. After pre-processing, the number of points is reduced from 4,066,282 to 449,806 under the constraint of a triangular grid size h of 2 mm, i.e., about 1/9 of the original point cloud. The result demonstrates the feasibility and efficiency of the approach.
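The streamlining step can be illustrated with a generic grid-based decimation (a simplified analogue, not the paper's exact hexagonal-projection procedure): bin points into cells of size h and keep one representative per occupied cell, which thins the cloud roughly by the average number of points per cell. The cloud and cell size below are made-up examples.

```python
# Generic grid-based decimation sketch (not the paper's exact method):
# keep the point closest to the centroid of each occupied grid cell.
import numpy as np

def grid_decimate(points, h=2.0):
    """points: (N, 3) array; h: grid cell size in the same units as points."""
    cells = np.floor(points / h).astype(np.int64)
    # Group points by cell index.
    _, inverse = np.unique(cells, axis=0, return_inverse=True)
    kept = []
    for cell_id in np.unique(inverse):
        idx = np.where(inverse == cell_id)[0]
        centroid = points[idx].mean(axis=0)
        # Keep the member closest to the cell centroid.
        kept.append(idx[np.argmin(np.linalg.norm(points[idx] - centroid, axis=1))])
    return points[np.array(kept)]

# Example: ~10,000 random points reduced to roughly one per occupied cell
cloud = np.random.rand(10_000, 3) * 20.0
print(grid_decimate(cloud, h=2.0).shape)
```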


2020 ◽  
Vol 13 (9) ◽  
pp. 4435-4442
Author(s):  
Patrick Obin Sturm ◽  
Anthony S. Wexler

Abstract. Large air quality models and large climate models simulate the physical and chemical properties of the ocean, land surface, and/or atmosphere to predict atmospheric composition, energy balance and the future of our planet. All of these models employ some form of operator splitting, also called the method of fractional steps, in their structure, which enables each physical or chemical process to be simulated in a separate operator or module within the overall model. In this structure, each of the modules calculates property changes for a fixed period of time; that is, property values are passed into the module, which calculates how they change for a period of time and then returns the new property values, all in round-robin fashion among the various modules of the model. Some of these modules require the vast majority of the computer resources consumed by the entire model, so increasing their computational efficiency can improve the model's computational performance, enable more realistic physical or chemical representations in the module, or both. Recent efforts have attempted to replace these modules with ones that use machine learning tools to memorize the input–output relationships of the most time-consuming modules. One shortcoming of some of the original modules and their machine-learned replacements is a lack of adherence to conservation principles that are essential to model performance. In this work, we derive a mathematical framework for machine-learned replacements that conserves properties – say mass, atoms, or energy – to machine precision. This framework can be used to develop machine-learned operator replacements in environmental models.
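One simple way to guarantee conservation to machine precision (an illustrative construction under its own assumptions, not the framework derived in the paper) is to have the learned operator output pairwise transfers between species rather than raw tendencies: each species' change is then gains minus losses, and the total is unchanged by construction. The species names and numbers below are made up.

```python
# Illustrative construction (not the paper's derivation): a learned
# operator outputs nonnegative transfers T[i, j] = mass moved from
# species i to species j; totals are conserved by construction.
import numpy as np

def apply_conservative_update(masses, transfers):
    """masses: (n,) species masses; transfers: (n, n) transfer matrix."""
    gains = transfers.sum(axis=0)   # mass received by each species
    losses = transfers.sum(axis=1)  # mass given away by each species
    # Note: a real scheme would also ensure losses never exceed the
    # available mass; omitted here for brevity.
    return masses + gains - losses

masses = np.array([1.0, 2.0, 0.5])
transfers = np.array([[0.0, 0.1, 0.0],
                      [0.0, 0.0, 0.3],
                      [0.2, 0.0, 0.0]])
updated = apply_conservative_update(masses, transfers)
print(updated.sum(), masses.sum())  # identical totals: 3.5 3.5
```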


1983 ◽  
Vol 64 (7) ◽  
pp. 779-784 ◽  
Author(s):  
R. A. Schiffer ◽  
W. B. Rossow

The International Satellite Cloud Climatology Project (ISCCP) has been approved as the first project of the World Climate Research Programme (WCRP) and will begin its operational phase in July 1983. Its basic objective is to collect and analyze satellite radiance data to infer the global distribution of cloud radiative properties in order to improve the modeling of cloud effects on climate. ISCCP has two components, operational and research. The operational component takes advantage of the global coverage provided by the current and planned international array of geostationary and polar-orbiting meteorological satellites during the 1980s to produce a five-year global satellite radiance and cloud data set. The main and most important characteristic of these data will be their globally uniform coverage of various indices of cloud cover. The research component of ISCCP will coordinate studies to validate the climatology, to improve cloud analysis algorithms, to improve modeling of cloud effects in climate models, and to investigate the role of clouds in the atmosphere's radiation budget and hydrologic cycle. Validation will involve comparative measurements at a number of test areas selected as representative of major (or difficult) cloud types and meteorological conditions. Complementary efforts within the framework of WCRP will promote the use of the resulting ISCCP data sets in climate research.


2014 ◽  
Vol 7 (5) ◽  
pp. 1443-1457 ◽  
Author(s):  
K. Beswick ◽  
D. Baumgardner ◽  
M. Gallagher ◽  
A. Volz-Thomas ◽  
P. Nedelec ◽  
...  

Abstract. A compact (500 cm³), lightweight (500 g), near-field, single-particle backscattering optical spectrometer is described that mounts flush with the skin of an aircraft and measures the concentration and optical equivalent diameter of particles from 5 to 75 μm. The backscatter cloud probe (BCP) was designed as a real-time qualitative cloud detector primarily for data quality control of trace gas instruments developed for the climate monitoring instrument packages that are being installed on commercial passenger aircraft as part of the European Union In-Service Aircraft for a Global Observing System (IAGOS) program (http://www.iagos.org/). Subsequent evaluations of the BCP measurements on a number of research aircraft, however, have revealed it capable of delivering quantitative particle data products including size distributions, liquid-water content and other information on cloud properties. We demonstrate the instrument's capability for delivering useful long-term climatological, as well as aviation performance information, across a wide range of environmental conditions. The BCP has been evaluated by comparing its measurements with those from other cloud particle spectrometers on research aircraft, and several BCPs are currently flying on commercial A340/A330 Airbus passenger airliners. The design and calibration of the BCP is described in this article, along with an evaluation of measurements made on the research and commercial aircraft. Preliminary results from more than 7000 h of airborne measurements by the BCP on two Airbus A340s operating on routine global traffic routes (one Lufthansa, the other China Airlines) show that more than 340 h of cloud data have been recorded at normal cruise altitudes (> 10 km) and more than 40% of the > 1200 flights were through clouds at some point between takeoff and landing. These data are a valuable contribution to databases of cloud properties in the upper troposphere, including sub-visible cirrus, and are useful for validating satellite retrievals of cloud water and effective radius; in addition, they provide a broader, geographically and climatologically relevant view of cloud microphysical variability that is useful for improving parameterizations of clouds in climate models. Moreover, they are also useful for monitoring the vertical climatology of clouds over airports, especially those over megacities where pollution emissions may be impacting local and regional climate.


2015 ◽  
Vol 08 (05) ◽  
pp. 1530001 ◽  
Author(s):  
Sonia Farhana Nimmy ◽  
M. S. Kamal

Next-generation sequencing (NGS) is an important process that enables inexpensive organization of vast raw sequence datasets compared with traditional sequencing systems or methods. Various aspects of NGS, such as template preparation, sequencing imaging, and genome alignment and assembly, outline the sequencing and alignment workflow. The de Bruijn graph (dBG) is an important mathematical tool that graphically analyzes how orientations are constructed in groups of nucleotides; essentially, the dBG describes the formation of genome segments in a circular, iterative fashion. Some pivotal dBG-based de novo algorithms and software packages, such as T-IDBA, Oases, IDBA-tran, Euler, Velvet, ABySS, AllPaths, SOAPdenovo and SOAPdenovo2, are illustrated in this paper. Overlap-layout-consensus (OLC) graph-based algorithms also play a vital role in NGS assembly; some important OLC-based algorithms, such as MIRA3, CABOG, Newbler, Edena, Mosaik and SHORTY, are portrayed in this paper. Experiments have shown that greedy graph-based algorithms and software packages are also vital for proper genome dataset assembly; a few such algorithms, namely SSAKE, SHARCGS and VCAKE, help to perform proper genome sequencing.
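As a minimal illustration of the dBG idea (a toy sketch, not any of the assemblers listed above), the graph can be built by splitting reads into k-mers and connecting each (k−1)-mer prefix to its (k−1)-mer suffix. The reads and k value below are made-up examples.

```python
# Minimal de Bruijn graph sketch (not one of the assemblers listed above).
from collections import defaultdict

def de_bruijn_graph(reads, k=4):
    """Map each (k-1)-mer prefix to the list of (k-1)-mer suffixes it precedes."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

# Example: two short overlapping reads
for node, edges in de_bruijn_graph(["ACGTAC", "GTACGT"], k=4).items():
    print(node, "->", edges)
```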

