Digitalization of the reporting of key comparisons for radionuclide metrology

Author(s):  
Romain Maximilien Coulon ◽  
Sammy Courte ◽  
Steven Judge ◽  
Carine Michotte ◽  
Manuel Nonis

Abstract The Bureau International des Poids et Mesures (BIPM) operates an international reference system (the SIR) to compare primary standards of radioactivity realized by National Metrology Institutes (NMIs). The way data relating to this system are managed has recently been redesigned, and the new model is fully integrated into the SI digital transformation initiated by the metrology community. The new approach automates the production of reports on the results of key comparison exercises for publication in the Key Comparison DataBase (KCDB), aiming to reduce the time needed to prepare reports without compromising quality. In operation for a year, the new system has produced 12 comparison reports within deadlines and at a quality that meets the needs of stakeholders in radionuclide metrology. The database and the software are controlled using the state-of-the-art Git version control system. In addition, the machine-readable database the system produces paves the way for more digital data exchanges that meet the FAIR principles and are directly accessible through a new Application Programming Interface (API) under development.
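The machine-readable exchange of comparison results described above can be sketched as a small serialization step. This is purely illustrative: the record fields and values below are invented, not the actual SIR/KCDB schema.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical record structure for one participant's comparison result.
# Field names are illustrative only, not the real SIR/KCDB data model.
@dataclass
class ComparisonResult:
    nmi: str                     # participating National Metrology Institute
    radionuclide: str            # e.g. "Co-60"
    equivalent_activity: float   # SIR equivalent activity (kBq)
    standard_uncertainty: float  # combined standard uncertainty (kBq)

def to_machine_readable(results):
    """Serialize comparison results to JSON for exchange via an API."""
    return json.dumps([asdict(r) for r in results], indent=2)

results = [
    ComparisonResult("NMI-A", "Co-60", 245.3, 0.9),
    ComparisonResult("NMI-B", "Co-60", 246.1, 1.1),
]
payload = to_machine_readable(results)
print(payload)
```

A structured record like this is what makes automated report generation and FAIR-style programmatic access possible, in contrast to results locked inside PDF reports.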

2019 ◽  
Vol 46 (8) ◽  
pp. 622-638
Author(s):  
Joachim Schöpfel ◽  
Dominic Farace ◽  
Hélène Prost ◽  
Antonella Zane

Data papers have been defined as scholarly journal publications whose primary purpose is to describe research data. Our survey provides more insight into the environment of data papers, i.e., disciplines, publishers and business models, and into their structure, length, formats, metadata, and licensing. Data papers are a product of the emerging ecosystem of data-driven open science. They contribute to the FAIR principles for research data management. However, the boundaries with other categories of academic publishing are partly blurred. Data papers are (or can be) generated automatically and are potentially machine-readable. Data papers are essentially information, i.e., descriptions of data, but they also partly contribute to the generation of knowledge and of data in their own right. As part of the new ecosystem of open and data-driven science, data papers and data journals are an interesting and relevant object for assessing and understanding the transition of the traditional system of academic publishing.


Author(s):  
Mollie Claypool

The paper subscribes to the belief that architecture should be wholly digital – from the scale of the micron and particle to the brick, beam and building, from design to fabrication or construction. This embodies a fundamental and disruptive shift in architecture and design thinking, demonstrated by the projects included here, enabling design to become more inclusive, participatory and open-source. Architecture that is wholly digital requires a radical rethinking of existing design and building practices. The projects described in this paper each develop a set of parts in relationship to a specific digital fabrication technology. These parts are defined as open-ended, universal and versatile building blocks with a digital logic of connectivity. Each physical part has a male-female connection which is the equivalent of the 0 and 1 in digital data. The design possibilities – or the ways that parts can combine and aggregate – are defined by the geometry, and therefore the design agency, of the piece itself. This discrete method advances a theoretical argument about the nature of digital design as needing to be fundamentally discrete, while at the same time responding to ideas coming from open-source, distributed modes of production. Furthermore, it responds to today’s housing crisis, providing a more democratic and equitable framework for the production of housing. To think of architecture as wholly digital is to substantially disrupt the way that we think about design, authorship, ownership and process, as well as the building technologies and practices we use in contemporary architectural production.


Author(s):  
Gary Smith

Humans have invaluable real-world knowledge because we have accumulated a lifetime of experiences that help us recognize, understand, and anticipate. Computers do not have real-world experiences to guide them, so they must rely on statistical patterns in their digital database—which may be helpful, but is certainly fallible.

We use emotions as well as logic to construct concepts that help us understand what we see and hear. When we see a dog, we may visualize other dogs, think about the similarities and differences between dogs and cats, or expect the dog to chase after a cat we see nearby. We may remember a childhood pet or recall past encounters with dogs. Remembering that dogs are friendly and loyal, we might smile and want to pet the dog or throw a stick for the dog to fetch. Remembering once being scared by an aggressive dog, we might pull back to a safe distance.

A computer does none of this. For a computer, there is no meaningful difference between dog, tiger, and XyB3c, other than the fact that they use different symbols. A computer can count the number of times the word dog is used in a story and retrieve facts about dogs (such as how many legs they have), but computers do not understand words the way humans do, and will not respond to the word dog the way humans do.

The lack of real-world knowledge is often revealed in software that attempts to interpret words and images. Language translation software programs are designed to convert sentences written or spoken in one language into equivalent sentences in another language. In the 1950s, a Georgetown–IBM team demonstrated the machine translation of 60 sentences from Russian to English using a 250-word vocabulary and six grammatical rules. The lead scientist predicted that, with a larger vocabulary and more rules, translation programs would be perfected in three to five years. Little did he know! He had far too much faith in computers.

It has now been more than 60 years and, while translation software is impressive, it is far from perfect. The stumbling blocks are instructive. Humans translate passages by thinking about the content—what the author means—and then expressing that content in another language.


2013 ◽  
Vol 45 (2) ◽  
pp. 425-450 ◽  
Author(s):  
James Allen Fill ◽  
Takehiko Nakama

When the search algorithm QuickSelect compares keys during its execution in order to find a key of target rank, it must operate on the keys' representations or internal structures, which were ignored by the previous studies that quantified the execution cost for the algorithm in terms of the number of required key comparisons. In this paper we analyze running costs for the algorithm that take into account not only the number of key comparisons, but also the cost of each key comparison. We suppose that keys are represented as sequences of symbols generated by various probabilistic sources and that QuickSelect operates on individual symbols in order to find the target key. We identify limiting distributions for the costs, and derive integral and series expressions for the expectations of the limiting distributions. These expressions are used to recapture previously obtained results on the number of key comparisons required by the algorithm.
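The cost measure the abstract builds on—the number of key comparisons QuickSelect performs—can be made concrete with a minimal sketch. This is a generic randomized QuickSelect with a comparison counter, not the authors' symbol-level model: here each key is compared to the pivot once per partitioning pass, as in the classical comparison-count analyses the paper extends.

```python
import random

def quickselect(keys, rank, counter):
    """Return the key of the given 0-based rank in keys.
    counter[0] accumulates the number of key comparisons made."""
    pivot = random.choice(keys)
    lows, highs, pivots = [], [], []
    for k in keys:
        counter[0] += 1  # one key comparison with the pivot per element
        if k < pivot:
            lows.append(k)
        elif k > pivot:
            highs.append(k)
        else:
            pivots.append(k)
    if rank < len(lows):
        return quickselect(lows, rank, counter)
    if rank < len(lows) + len(pivots):
        return pivot
    return quickselect(highs, rank - len(lows) - len(pivots), counter)

comparisons = [0]
data = [9, 1, 8, 2, 7, 3, 6, 4, 5]
print(quickselect(data, 4, comparisons))  # median of 1..9 -> 5
print(f"{comparisons[0]} key comparisons")
```

The paper's refinement is to charge each such comparison a non-unit cost: when keys are symbol sequences, comparing two keys means scanning symbols until they first differ, so the cost of a comparison depends on the keys' representations, not just their relative order.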


1993 ◽  
Vol 23 (2) ◽  
pp. 47-54 ◽  
Author(s):  
Peter Roach ◽  
Gerry Knowles ◽  
Tamas Varadi ◽  
Simon Arnfield

The purpose of this paper is to describe a new version of the Spoken English Corpus which will be of interest to phoneticians and other speech scientists. The Spoken English Corpus is a well-known collection of spoken-language texts that was collected and transcribed in the 1980s in a joint project involving IBM UK and the University of Lancaster (Alderson and Knowles forthcoming, Knowles and Taylor 1988). One valuable aspect of it is that the recorded material on which it was based is fairly freely available and the recording quality is generally good. At the time when the recordings were made, the idea of storing all the recorded material in digital form suitable for computer processing was of limited practicality. Although storage on digital tape was certainly feasible, this did not provide rapid computer access. The arrival of optical disk technology, with the possibility of storing very large amounts of digital data on a compact disk at relatively low cost, has brought about a revolution in ideas on database construction and use. It seemed to us that the recordings of the Spoken English Corpus (hereafter SEC) should now be converted into a form which would enable the user to gain access to the acoustic signal without the laborious business of winding through large amounts of tape. Once this was done, we should be able not only to listen to the recordings in a very convenient way, but also to carry out many automatic analyses of the material by computer.


2016 ◽  
Vol 6 (1) ◽  
Author(s):  
J. Mäkinen ◽  
R.A. Sermyagin ◽  
I.A. Oshchepkov ◽  
A.V. Basmanov ◽  
A.V. Pozdnyakov ◽  
...  

Abstract In June–July 2013, we performed a comparison of five absolute gravimeters of different types. The gravimeters were the FG5X-221 of the FGI, the FG5-110 and GBL-M 002 of the TsNIIGAiK, the GABL-PM of the IAE SB RAS, and the GABL-M of the NIIMorGeofizika (Murmansk, Russia). The last three are field-type portable gravimeters made by the Institute of Automation and Electrometry in Novosibirsk, and this was their first international comparison. This Russian-Finnish Comparison of Absolute Gravimeters (RFCAG2013) was conducted at four sites with different characteristics: the field sites Pulkovo and Svetloe near St. Petersburg, and the laboratory sites TsNIIGAiK in Moscow and Zvenigorod near Moscow. At the TsNIIGAiK site and at Zvenigorod two piers were used, so that altogether six stations were occupied. The FG5X-221 provides the link to the CCM.G-K2 Key Comparison held in Luxembourg in November 2013. Recently, the Consultative Committee for Mass and Related Quantities and the International Association of Geodesy drafted a strategy on how best to transmit the results of Key Comparisons of absolute gravimeters to benefit the geodetic and geophysical gravimetric community. Our treatment of the RFCAG2013 presents one of the first practical applications of the ideas of the strategy document, and we discuss the resulting uncertainty structure. Regarding the comparison results, we find that the gravimeters show consistent offsets at the quite different sites. All except one gravimeter are in equivalence.
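The equivalence statement at the end of the abstract rests on degrees of equivalence: each instrument's offset from a comparison reference value, judged against its expanded uncertainty. The sketch below shows one textbook form of that computation (weighted-mean reference value, k = 2 coverage); the numbers are invented, not the RFCAG2013 results, and the paper's actual uncertainty treatment is more elaborate.

```python
import math

def degrees_of_equivalence(values, uncertainties):
    """values: gravimeter readings (e.g. microgal offsets from a site
    nominal value); uncertainties: their standard uncertainties.
    Returns the reference value and a list of (d_i, u(d_i)) pairs."""
    weights = [1.0 / u**2 for u in uncertainties]
    ref = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    u_ref = math.sqrt(1.0 / sum(weights))
    out = []
    for v, u in zip(values, uncertainties):
        d = v - ref                        # degree of equivalence
        u_d = math.sqrt(u**2 - u_ref**2)   # value is correlated with the reference
        out.append((d, u_d))
    return ref, out

# Made-up example: four instruments, one with an outlying reading.
ref, dof = degrees_of_equivalence([3.2, 1.8, 2.5, 9.0], [2.0, 2.5, 2.2, 2.3])
for d, u_d in dof:
    equivalent = abs(d) <= 2.0 * u_d  # expanded uncertainty, k = 2
    print(f"d = {d:+.2f}, U = {2 * u_d:.2f}, equivalent: {equivalent}")
```

With these invented inputs, three instruments agree with the reference within their expanded uncertainties and one does not, mirroring the "all except one" outcome in qualitative form.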


Metrologia ◽  
2010 ◽  
Vol 47 (1A) ◽  
pp. 08002-08002
Author(s):  
Sang-Hyub Oh ◽  
Byung Moon Kim ◽  
Qiao Han ◽  
Zeyi Zhou


2021 ◽  
Author(s):  
Francesca Frexia ◽  
Cecilia Mascia ◽  
Luca Lianas ◽  
Giovanni Delussu ◽  
Alessandro Sulis ◽  
...  

Abstract The FAIR Principles are a set of recommendations that aim to underpin knowledge discovery and integration by making research outcomes Findable, Accessible, Interoperable and Reusable. These guidelines encourage the accurate recording and exchange of structured data, coupled with contextual information about their creation, expressed in domain-specific standards and machine-readable formats. This paper analyses the potential support to FAIRness of the openEHR e-health standard, by theoretically assessing the compliance with each of the 15 FAIR principles of a hypothetical Clinical Data Repository (CDR) developed according to the openEHR specifications. Our study highlights how the openEHR approach, thanks to its computable semantics-oriented design, is inherently FAIR-enabling and is a promising implementation strategy for creating FAIR-compliant CDRs.


2021 ◽  
Vol 8 (2) ◽  
pp. 180-185
Author(s):  
Anna Tolwinska

This article aims to explain the key metadata elements listed in Participation Reports, why it is important to check them regularly, and how Crossref members can improve their scores. Crossref members register a lot of metadata in Crossref. That metadata is machine-readable, standardized, and shared across discovery services and author tools. This matters because richer metadata makes content more discoverable and useful to the scholarly community. It is not always easy to know what metadata Crossref members register in Crossref, which is why Crossref created an easy-to-use tool called Participation Reports to show editors and researchers the key metadata elements Crossref members register to make their content more useful. The key metadata elements include references and whether they are set to open, ORCID iDs, funding information, Crossmark metadata, licenses, full-text URLs for text mining, Similarity Check indexing, and abstracts. ROR IDs (Research Organization Registry Identifiers), which identify institutions, will be added in the future. This data was always available through the Crossref REST API (Representational State Transfer Application Programming Interface) but is now visualized in Participation Reports. To improve scores, editors should encourage authors to submit ORCID iDs with their manuscripts, and publishers should register as much metadata as possible to help drive research further.
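The per-element coverage scores behind a Participation Report can be read programmatically. The sketch below parses a response fragment shaped like a Crossref REST API member record and flags low-coverage metadata; the sample payload and its exact field names are illustrative assumptions and should be checked against the live api.crossref.org documentation.

```python
import json

# Invented sample in the general shape of a Crossref member record
# (api.crossref.org/members/{id}); coverage scores are fractions of
# current content carrying each metadata element.
sample_response = """
{
  "message": {
    "primary-name": "Example Publisher",
    "coverage": {
      "references-current": 0.92,
      "orcids-current": 0.41,
      "abstracts-current": 0.15,
      "funders-current": 0.30,
      "licenses-current": 0.88
    }
  }
}
"""

def coverage_report(raw_json, threshold=0.5):
    """Return the coverage fields below the given threshold,
    i.e. the metadata a member could register more completely."""
    coverage = json.loads(raw_json)["message"]["coverage"]
    return {field: score for field, score in coverage.items() if score < threshold}

for field, score in sorted(coverage_report(sample_response).items()):
    print(f"{field}: {score:.0%} of current content")
```

A report like this makes concrete where a member's registered metadata falls short, which is exactly the self-assessment the Participation Reports tool visualizes.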

