Towards data interchangeability in paleomagnetism

Author(s):  
Christian Zeeden ◽  
Christian Laag ◽  
Pierre Camps ◽  
Yohan Guyodo ◽  
Ulrich Hambach ◽  
...  

Paleomagnetic data are stored in a variety of formats, adapted to the output of different devices and to specific analysis software. These include widely used, openly available packages such as PMag.py/MagIC, AGICO (.jr6 and .ged) and PuffinPlot (.ppl). In addition, individual laboratories have established their own software and data formats.

Here we compare different data formats, identify similarities and create a common and interchangeable data basis. We introduce the idea of a paleomagnetic object (pmob), a simple data table that can include any and all data relevant to the user. We propose a basic nomenclature of abbreviations for the most common paleomagnetic data to merge different data formats. For this purpose, we introduce a set of automated routines for paleomagnetic data conversion. Our routines bring several data formats into a common format (pmob) and also allow conversion back into selected formats. We propose creating similar routines for all existing paleomagnetic data formats; our suite of computational tools will provide the basis to facilitate the inclusion of further formats. Furthermore, automated data processing enables quality assessment of the data.
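The pmob itself is described only as a simple data table with a common nomenclature; as a rough illustration of what such a conversion routine could look like, the following Python sketch maps instrument-specific column names onto a hypothetical set of common abbreviations (the mapping and file layout are assumptions, not the authors' actual nomenclature or code).

```python
# Illustrative sketch only: the column abbreviations and the whitespace-delimited
# layout are assumptions, not the authors' actual pmob nomenclature or routines.
import pandas as pd

# Hypothetical mapping from an instrument-specific header to common pmob names
JR6_TO_PMOB = {"ID": "specimen", "STEP": "treatment", "D": "dec", "I": "inc", "M": "intensity"}

def to_pmob(path, mapping=JR6_TO_PMOB):
    """Read a whitespace-delimited measurement file and rename its columns
    to a common paleomagnetic-object (pmob) vocabulary."""
    df = pd.read_csv(path, sep=r"\s+")
    return df.rename(columns=mapping)

def from_pmob(pmob, mapping=JR6_TO_PMOB):
    """Revert a pmob table to the instrument-specific column names."""
    return pmob.rename(columns={v: k for k, v in mapping.items()})
```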

2011 ◽  
Vol 314-316 ◽  
pp. 2253-2258
Author(s):  
Dong Gen Cai ◽  
Tian Rui Zhou

Data processing and conversion play an important role in rapid prototyping (RP) processes, where the choice of data format determines the data processing procedure and method. In this paper, the formats and features of commonly used interface standards such as STL, IGES and STEP are introduced. Data conversion experiments on CAD models are carried out in the Pro/E system, the conversion results of the different data formats are compared and analyzed, and the most suitable conversion format is proposed.
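For context, STL, the simplest of the three interchange formats compared, stores bare triangles with no topology or product structure; a minimal reader for its ASCII variant (illustrative only, not the paper's Pro/E-based workflow) shows how little information the format carries.

```python
# Tiny reader for ASCII STL, illustrating that the format carries only loose
# triangles (no topology, units or product structure). Not the paper's workflow.
def read_ascii_stl(path):
    """Return a list of triangles, each given as three (x, y, z) vertex tuples."""
    triangles, vertices = [], []
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if parts and parts[0] == "vertex":
                vertices.append(tuple(float(v) for v in parts[1:4]))
                if len(vertices) == 3:          # every STL facet is a triangle
                    triangles.append(tuple(vertices))
                    vertices = []
    return triangles
```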


Author(s):  
Ryan Mackenzie White

Adopting non-traditional data sources to augment or replace traditional survey vehicles can reduce respondent burden, provide more timely information for policy makers, and yield insights into society that might otherwise be hidden or missed by traditional survey vehicles. The use of non-traditional data sources imposes several technological challenges due to the volume, velocity and quality of the data. The lack of an applied, industry-standard data format is a limiting factor affecting the reception, processing and analysis of these data sources. Adopting a standardized, cross-language, in-memory data format that is organized for efficient analytic operations on modern hardware as the system of record for all administrative data sources has several implications: it enables efficient use of computational resources related to I/O, processing and storage; improves data sharing, management and governance capabilities; and increases analysts' access to tools, technologies and methods. Statistics Canada developed a framework for selecting computing architecture models for efficient data processing based on benchmark data pipelines representative of common administrative data processes. The data pipelines demonstrate the benefits of a standardized data format for data management and the efficient use of computational resources. They define the preprocessing requirements, data ingestion, data conversion and metadata modeling for integration into a common computing architecture. The integration of a standardized data format into a distributed data processing framework based on container technologies is discussed as a general technique for processing large volumes of administrative data.
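The abstract does not name the format, but the description (standardized, cross-language, in-memory, organized for analytics) matches Apache Arrow; a minimal ingest sketch using pyarrow, with placeholder file names, could look as follows.

```python
# Minimal ingest sketch assuming Apache Arrow (pyarrow); file names are
# placeholders, not Statistics Canada's actual pipeline or data.
import pyarrow.csv as pacsv
import pyarrow.parquet as pq

# Read a raw administrative extract into an in-memory, columnar Arrow table
table = pacsv.read_csv("admin_extract.csv")

# Persist it as Parquet so every pipeline stage shares one system of record
pq.write_table(table, "admin_extract.parquet")

# Hand the data to analyst tooling (pandas) without re-parsing the raw file
df = pq.read_table("admin_extract.parquet").to_pandas()
```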


2013 ◽  
Vol 765-767 ◽  
pp. 2485-2488
Author(s):  
Wen Xiu Tang ◽  
Wei Yu Zheng

Commercial power system simulation software packages use different data formats, and conversion between these formats is a widespread, still unsolved problem. Thanks to the improvement and extension of the Open Data Model (ODM), it is possible to describe the PSD-BPA transient model and to develop a PSD-BPA/ODM adapter program; the adapter is used to convert BPA data to ODM/InterPSS. The correctness of the developed adapter is verified with the IEEE 9-bus system and a practical system conversion, illustrating the feasibility and convenience of ODM-based data conversion. The developed procedures are open within the ODM project and are useful for the development of adapters for other data formats.
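The adapter idea, converting each vendor format to and from a neutral open data model rather than writing pairwise converters, can be sketched as follows; the record layout and field names are illustrative, not the actual ODM schema or BPA card format.

```python
# Sketch of the adapter pattern only: a neutral "open data model" record sits
# between the source (BPA-like) and target (InterPSS-like) formats, so each tool
# needs just one adapter. Field names are illustrative, not the real ODM schema.
from dataclasses import dataclass

@dataclass
class OdmBus:                      # neutral, tool-independent bus record
    bus_id: int
    name: str
    base_kv: float

def bpa_record_to_odm(record: str) -> OdmBus:
    """Parse one simplified, hypothetical BPA-style bus record into the neutral model."""
    name, bus_id, base_kv = record.split(",")
    return OdmBus(bus_id=int(bus_id), name=name.strip(), base_kv=float(base_kv))

def odm_to_interpss(bus: OdmBus) -> dict:
    """Emit a simplified, hypothetical InterPSS-style record from the neutral model."""
    return {"id": bus.bus_id, "name": bus.name, "baseVoltage": bus.base_kv}
```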


2019 ◽  
Vol 214 ◽  
pp. 06029
Author(s):  
Danilo Piparo ◽  
Philippe Canal ◽  
Enrico Guiraud ◽  
Xavier Valls Pla ◽  
Gerardo Ganis ◽  
...  

The physics programmes of LHC Run III and the HL-LHC challenge the HEP community. The volume of data to be handled is unprecedented at every step of the data processing chain, and analysis is no exception. Physicists must be provided with first-class analysis tools that are easy to use, exploit bleeding-edge hardware technologies and allow parallelism to be expressed seamlessly. This document discusses the declarative analysis engine of ROOT, RDataFrame, and details how it profitably exploits commodity hardware as well as high-end servers and manycore accelerators thanks to its synergy with the existing parallelised ROOT components. Real-life analyses of LHC experiments' data expressed in terms of RDataFrame are presented, highlighting the programming model provided to express them in a concise and powerful way. Recent developments that make RDataFrame a lightweight data processing framework, such as callbacks and I/O capabilities, are described. Finally, the flexibility of RDataFrame and its ability to read data formats other than ROOT's are characterised; as an example, it is discussed how RDataFrame can directly read and analyse LHCb's raw data format, MDF.
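The declarative model can be illustrated with a short PyROOT snippet; the tree, file and branch names below are placeholders rather than a real LHC analysis.

```python
# Minimal declarative-analysis sketch with ROOT's RDataFrame (PyROOT).
# Tree, file and branch names are placeholders, not a real LHC analysis.
import ROOT

ROOT.EnableImplicitMT()                        # let RDataFrame use all cores
df = ROOT.RDataFrame("Events", "data.root")    # lazily bind to a TTree

hist = (df.Filter("nMuon == 2", "two muons")           # declare a selection
          .Define("pt_lead", "Muon_pt[0]")             # declare a new column
          .Histo1D(("pt_lead", ";p_{T} [GeV];Events", 100, 0.0, 200.0), "pt_lead"))

hist.Draw()                                    # the event loop runs once, here
```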


Author(s):  
M. Gabriele ◽  
M. Previtali

Abstract. Proprietary software investment in the data integration field is increasing, and progress is visible in the possibility of directly opening a 3D software data format in a GIS environment. Still, this is limited to integration between proprietary data formats and standards, i.e. the ArcGIS multipatch shapefile and the Revit 3D model, using proprietary software (ArcGIS). This study builds on the lessons learnt in proprietary data integration and aims to replicate a similar result using the open IFC standard, which cannot be opened directly in a GIS interface and requires a conversion that in most cases leads to semantic and geometric losses. An IFC-to-shapefile data conversion was therefore performed, focusing on (i) the way information is stored in the attribute table so that geometries can be queried and geoprocessed, (ii) workarounds to keep the Revit instances' shared parameters in the IFC file, and (iii) maintaining a high level of detail (LOD) of the HBIM. The conversion was carried out with FME (Feature Manipulation Engine), benefitting from the flexibility of the shapefile format and from the IFC's ability to retain a high LOD in the export phase. Both allowed the elements of an HBIM to be properly queried and managed in a GIS (ArcGIS environment) and, using relational attribute tables, the information contained in each Revit instance's property panel to be retrieved, such as the shared parameters that implement the BIM Level of Information (LOI).
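The conversion itself was done in FME; purely as an open-source stand-in, the sketch below shows how IFC property sets (including shared parameters exported from Revit) can be flattened into an attribute table keyed by GlobalId before being joined to the converted geometries in a GIS. The library, file name and entity type here are assumptions, not the authors' workflow.

```python
# Open-source stand-in, not the authors' FME workflow: flatten IFC property sets
# (including Revit shared parameters exported to IFC) into an attribute table
# that can be joined to shapefile geometries via GlobalId.
import ifcopenshell
import ifcopenshell.util.element
import pandas as pd

model = ifcopenshell.open("hbim_model.ifc")            # placeholder file name

rows = []
for element in model.by_type("IfcWall"):               # placeholder entity type
    psets = ifcopenshell.util.element.get_psets(element)
    flat = {f"{pset}.{prop}": value
            for pset, props in psets.items()
            for prop, value in props.items()}
    rows.append({"GlobalId": element.GlobalId, "Name": element.Name, **flat})

attribute_table = pd.DataFrame(rows)
```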


2020 ◽  
Vol 245 ◽  
pp. 06016
Author(s):  
Benjamin Edward Krikler ◽  
Olivier Davignon ◽  
Lukasz Kreczko ◽  
Jacob Linacre

The Faster Analysis Software Taskforce (FAST) is a small, European group of HEP researchers that has been investigating and developing modern software approaches to improve HEP analyses. We present here an overview of the key product of this effort: a set of packages that allows a complete implementation of an analysis using almost exclusively YAML files. Serving as an analysis description language (ADL), this toolset builds on top of the evolving technologies from the Scikit-HEP and IRIS-HEP projects as well as industry-standard libraries such as Pandas and Matplotlib. Data processing starts with event-level data (the trees) and can proceed by adding variables, selecting events, performing complex user-defined operations and binning data, as defined in the YAML description. The resulting outputs (the tables) are stored as Pandas dataframes, which can be programmatically manipulated and converted to plots or inputs for fitting frameworks. No longer just a proof of principle, these tools are now being used in CMS analyses, in the LUX-ZEPLIN experiment, and by students on several other experiments. In this talk we showcase these tools through examples, highlighting how they address the different experiments' needs, and compare them to other similar approaches.
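As a toy illustration of the YAML-driven idea (this is not the FAST-HEP schema), a description of derived variables, a selection and a binning can be loaded and applied to event-level data with pandas:

```python
# Toy illustration of a YAML-driven analysis step; NOT the FAST-HEP schema.
import pandas as pd
import yaml

description = yaml.safe_load("""
define:
  pt_ratio: pt_lead / pt_sublead
select: "n_muons == 2 and pt_lead > 25"
binning:
  variable: pt_ratio
  edges: [1.0, 1.5, 2.0, 3.0, 5.0]
""")

events = pd.DataFrame({"n_muons": [2, 2, 3],
                       "pt_lead": [40.0, 30.0, 50.0],
                       "pt_sublead": [20.0, 28.0, 10.0]})

for name, expr in description["define"].items():    # add derived variables
    events[name] = events.eval(expr)
selected = events.query(description["select"])      # apply the event selection
bins = pd.cut(selected[description["binning"]["variable"]],
              description["binning"]["edges"])
table = selected.groupby(bins, observed=False).size()   # the binned "table"
```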


2015 ◽  
Vol 31 (2) ◽  
pp. 231-247 ◽  
Author(s):  
Matthias Schnetzer ◽  
Franz Astleithner ◽  
Predrag Cetkovic ◽  
Stefan Humer ◽  
Manuela Lenk ◽  
...  

Abstract This article contributes a framework for the quality assessment of imputations within a broader structure to evaluate the quality of register-based data. Four quality-related hyperdimensions examine the data processing from the raw-data level to the final statistics. Our focus lies on the quality assessment of different imputation steps and their influence on overall data quality. We suggest classification rates as a measure of accuracy of imputation and derive several computational approaches.
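A classification rate in this sense is simply the share of imputed values that agree with the true, held-back values; a minimal sketch with made-up labels:

```python
# Minimal sketch of a classification rate for a categorical imputation: the
# share of imputed values that match the held-back true values. The labels are
# made up for illustration, not the authors' register data.
import pandas as pd

true_values    = pd.Series(["employed", "employed", "inactive", "unemployed"])
imputed_values = pd.Series(["employed", "inactive", "inactive", "unemployed"])

classification_rate = (imputed_values == true_values).mean()
print(f"classification rate: {classification_rate:.2f}")   # 0.75
```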


2021 ◽  
Author(s):  
Jikai Ding ◽  
Shihong Zhang ◽  
et al.

Laboratory methods, paleomagnetic data table, selected paleomagnetic poles, Euler rotation parameters, and supplemental references.


2021 ◽  
Author(s):  
Ivan Efremov ◽  
Roman Veselovskiy

There are many programs for the analysis and visualization of paleomagnetic data, but each of them is good only for a certain use case and does not cover the full cycle of paleomagnetic operations. One therefore has to resort to a number of programs to complete the full path of paleomagnetic data processing, often converting data from one format to another, manually vectorizing charts, and generally spending more time and effort than should be necessary. Thus, there is a long overdue need for a universal program capable of performing the full cycle of paleomagnetic operations quickly, conveniently and with high quality. A set of programs written by Randy Enkin (Enkin, 1996) for DOS was taken as a time-tested example: although very outdated, these programs cover the full cycle of paleomagnetic operations and do so as conveniently and efficiently as was possible at the time.

Our goal is to create a program free of the above disadvantages and capable of developing indefinitely as modular open-source software through the efforts of everyone interested.

The result of our work is PMTools, a cross-platform application for statistical analysis and visualization of paleomagnetic data. PMTools supports all widely used paleomagnetic data formats and allows them to be used simultaneously. All charts created in PMTools are vector graphics, suitable for direct use in publications and presentations, and can be exported in both vector and raster formats. At the same time, PMTools implements the full cycle of routine paleomagnetic operations, from finding best-fit directions to calculating mean paleomagnetic poles. All operations can be performed both with the mouse through a graphical user interface and with hotkeys, which significantly speeds up data analysis.

In the near future, PMTools will become a modular open-source application, so that each user will be able to add their own modules, thereby extending the program's functionality.

References

Enkin, R.J., 1996. A Computer Program Package for Analysis and Presentation of Paleomagnetic Data, Pacific Geoscience Center, Geological Survey of Canada, http://www.pgc.nrcan.gc.ca/tectonic/enkin.htm.
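One of the routine operations mentioned above, computing a mean direction, is conventionally done with Fisher (1953) statistics; the following sketch (not PMTools code) shows the calculation.

```python
# Not PMTools code: a minimal Fisher (1953) mean of paleomagnetic directions,
# one of the routine operations the abstract refers to.
import numpy as np

def fisher_mean(dec_deg, inc_deg):
    """Return the mean declination/inclination (degrees) and precision parameter k."""
    dec, inc = np.radians(dec_deg), np.radians(inc_deg)
    # Unit direction cosines of each measured direction
    x, y, z = np.cos(inc) * np.cos(dec), np.cos(inc) * np.sin(dec), np.sin(inc)
    rx, ry, rz = x.sum(), y.sum(), z.sum()
    r = np.sqrt(rx**2 + ry**2 + rz**2)            # length of the resultant vector
    n = dec.size
    mean_dec = np.degrees(np.arctan2(ry, rx)) % 360.0
    mean_inc = np.degrees(np.arcsin(rz / r))
    k = (n - 1) / (n - r)                         # Fisher precision parameter
    return mean_dec, mean_inc, k

print(fisher_mean(np.array([350.0, 5.0, 10.0]), np.array([45.0, 50.0, 40.0])))
```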


2003 ◽  
Vol 12 (2) ◽  
Author(s):  
R. L. Riddle ◽  
S. D. Kawaler

Abstract. As the WET moves to CCD systems, we move away from the uniformity of the standard WET photometer into an arena where each system can be radically different. There are many possible CCD photometry systems that can fulfil the requirements of a WET instrument, but each will have its own native data format. During XCov22, it became readily apparent that the WET requires a defined data format for all CCD data that arrive at HQ. This paper describes the proposed format for the next generation of WET data; the final version will be the default format for XQED, the new photometry package discussed elsewhere in these proceedings.

