data production
Recently Published Documents

TOTAL DOCUMENTS: 345 (five years: 142)
H-INDEX: 16 (five years: 3)

2022 ◽  
Author(s):  
Benjamin Grant Purzycki ◽  
Theiss Bendixen ◽  
Aaron Lightner

The target article by Turchin et al. assesses the relationship between social complexity and moralistic supernatural punishment. In our evaluation of their project, we argue that each step of its workflow, from data production and theory to modeling and reporting, makes it impossible to test the hypothesis its authors claim to be testing. We focus our discussion on three important classes of issues: problems of data, analysis, and causal inference.


2022 ◽  
Vol 16 (4) ◽  
pp. 122-129
Author(s):  
Sanat Seitov

The research was carried out to highlight the main problems impeding the competitiveness of Kazakhstani animal husbandry. Productivity indicators (milk yield, wool clip per sheep, etc.) and aggregated data (production volumes, indices of the physical volume of gross production) were used as criteria for assessing the development of the industry. In Kazakhstan, beef pedigree cattle accounted for only 11.5% of the total cattle population in 2019. The average live weight of cattle was 336 kg and the average slaughter weight 175 kg, half the world standard; the average live weight of one bird was 2.2 kg. The republic has a weak base for producing basic feed for the fattening herd, so its supply of such feed stands at 57.8% of the scientifically grounded norm. The share of breeding stock in the total population is 2.8% for dairy cattle (as of January 1, 2018), 12.3% for poultry of all types, and 14.8% for sheep. To increase competitiveness under current conditions, efforts should focus on providing highly productive breeding cattle and poultry; improving the fodder base by expanding plantings of corn, soybeans, alfalfa, and chickpea; strengthening preventive work against especially dangerous animal diseases; adapting scientific advances in genetics, selection, and fodder production to the current economic conditions of animal husbandry; accelerating the transfer of animal husbandry to new technologies; and implementing international standards for product quality and management.


2021 ◽  
pp. 27-46
Author(s):  
Day-Yang Liu ◽  
Hui-Chien Fan ◽  
Joseph C.P. Shieh ◽  
Cheng-Hsien Lin

Taiwan has proven itself successful both at inventing the key technologies behind the development of 5G (fifth-generation wireless technology) related industries and at serving as an indispensable link in the burgeoning global 5G supply chain. This study analyzes the current state of Taiwan's 5G industry using the Dynamic Slacks-Based Measure (DSBM). To this end, a dynamic-data production process model was developed to analyze the 5G industry's overall relative efficiency. Results indicate that (1) key chip-producing companies typically experience increased efficiency following investment in 5G R&D, and their relative efficiency is positively affected by R&D investment; and (2) the relative efficiency gains of key chip companies were higher than those of brand terminal and downstream-industry companies, while companies with higher levels of R&D investment exhibited relatively higher and more significant levels of efficiency. Finally, the effect of R&D investment on the relative efficiency of Taiwan's 5G-related industries was found to be statistically significant, a result relevant to the Taiwanese government's industrial policies on 5G R&D investment. Keywords: Dynamic Slacks-Based Measure (DSBM), 5G industry, R&D inputs.
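The Dynamic Slacks-Based Measure itself is a multi-period extension of slacks-based DEA. As a rough illustration of the underlying efficiency-measurement idea only, the following is a minimal static input-oriented DEA (CCR) sketch in Python with scipy; the companies, inputs, and outputs are invented, and this is not the DSBM model the authors estimate.

```python
# Minimal static input-oriented DEA (CCR) sketch. NOT the Dynamic Slacks-Based
# Measure (DSBM) used in the study; data below are invented for illustration.
import numpy as np
from scipy.optimize import linprog

# rows = DMUs (hypothetical companies), columns = inputs (e.g. R&D spend, staff)
X = np.array([[4.0, 140], [7.0, 300], [8.0, 150], [4.0, 220], [2.0, 350]])
# rows = DMUs, columns = outputs (e.g. revenue, patents)
Y = np.array([[14.0, 9], [14.0, 5], [42.0, 4], [28.0, 6], [19.0, 7]])

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o:
    min theta  s.t.  X.T @ lam <= theta * X[o],  Y.T @ lam >= Y[o],  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    # decision vector z = [theta, lam_1, ..., lam_n]
    c = np.r_[1.0, np.zeros(n)]
    # input constraints:  sum_j lam_j x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # output constraints: -sum_j lam_j y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun  # theta in (0, 1]; 1 means efficient

for o in range(X.shape[0]):
    print(f"DMU {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```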


2021 ◽  
Author(s):  
Ashleigh Hawkins

Mass digitisation and the exponential growth of born-digital archives over the past two decades have resulted in an enormous volume of archives and archival data being available digitally. This has produced a valuable but under-utilised source of large-scale digital data ripe for interrogation by scholars and practitioners in the Digital Humanities. However, current digitisation approaches fall short of the requirements of digital humanists for structured, integrated, interoperable, and interrogable data. Linked Data provides a viable means of producing such data, creating machine-readable archival data suited to analysis with digital humanities research methods. While a growing body of archival scholarship and praxis has explored Linked Data, its potential to open up digitised and born-digital archives to the Digital Humanities is under-examined. This article approaches Archival Linked Data from the perspective of the Digital Humanities, extrapolating from both archival and digital humanities Linked Data scholarship to identify the benefits to digital humanists of producing and providing access to Archival Linked Data. It considers some of the current barriers that prevent digital humanists from realising these benefits and from fully utilising archives that have been made available digitally. The article argues for increased collaboration between the two disciplines, challenges individuals and institutions to engage with Linked Data, and suggests incorporating AI and low-barrier tools such as Wikidata into the Linked Data production workflow in order to scale up the production of Archival Linked Data and thereby increase access to and utilisation of digitised and born-digital archives.
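As an illustration of what producing Archival Linked Data can mean in practice, the following is a minimal sketch using the Python rdflib library to describe a digitised archival item with Dublin Core terms and link it to a Wikidata entity. Every URI, identifier, and Q-number here is a placeholder, not a record from any real archive or from the article.

```python
# A minimal Linked Data sketch with rdflib. All URIs, identifiers, and the
# Wikidata Q-number are placeholders for illustration only.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, RDF

ARCHIVE = Namespace("https://example.org/archive/")   # hypothetical base URI
WD = Namespace("http://www.wikidata.org/entity/")

g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("wd", WD)

item = ARCHIVE["item/0001"]
g.add((item, RDF.type, DCTERMS.BibliographicResource))
g.add((item, DCTERMS.title, Literal("Letter to the county surveyor", lang="en")))
g.add((item, DCTERMS.created, Literal("1897-03-14")))
# Reconciling the creator against Wikidata (placeholder Q-number) is what makes
# the record interoperable with other Linked Data sources.
g.add((item, DCTERMS.creator, WD["Q00000000"]))

print(g.serialize(format="turtle"))
```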


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Benjamin Kiessling ◽  
Gennady Kurin ◽  
Matthew Thomas Miller ◽  
Kader Smail

This work presents an accuracy study of the open-source OCR engine Kraken on the leading Arabic scholarly journal al-Abhath. In contrast with other commercially available OCR engines, Kraken is shown to be capable of producing highly accurate Arabic-script OCR. The study also assesses the relative accuracy of typeface-specific and generalized models on the al-Abhath data and provides a microanalysis of the "error instances" and the contextual features that may have contributed to OCR misrecognition. Building on this analysis, the paper argues that Arabic-script OCR can be significantly improved through (1) a more systematic approach to training data production, and (2) the development of key technological components, especially multi-language models and improved line segmentation and layout analysis.
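Accuracy in studies like this one is usually reported as character error rate (CER): the edit distance between the OCR output and a ground-truth transcription, divided by the length of the ground truth. Below is a minimal self-contained Python sketch of that metric with invented example strings; it is not the evaluation code used in the study.

```python
# Character error rate (CER) sketch. The two strings are invented stand-ins
# for a ground-truth transcription and an OCR hypothesis.
def levenshtein(ref: str, hyp: str) -> int:
    """Edit distance (insertions, deletions, substitutions) between two strings."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate = edit distance / reference length."""
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

ground_truth = "البحث العلمي"   # invented ground-truth line
ocr_output   = "البحت العلمي"   # invented OCR output with one substitution
print(f"CER = {cer(ground_truth, ocr_output):.2%}")
```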


2021 ◽  
Vol 3 ◽  
Author(s):  
Gretchen L. Mullendore ◽  
Matthew S. Mayernik ◽  
Douglas C. Schuster

There is strong agreement across the sciences that replicable workflows are needed for computational modeling. Open and replicable workflows not only strengthen public confidence in the sciences, but also result in more efficient community science. However, the massive size and complexity of geoscience simulation outputs, as well as the large cost to produce and preserve these outputs, present problems related to data storage, preservation, duplication, and replication. The simulation workflows themselves present additional challenges related to usability, understandability, documentation, and citation. These challenges make it difficult for researchers to meet the bewildering variety of data management requirements and recommendations across research funders and scientific journals. This paper introduces initial outcomes and emerging themes from the EarthCube Research Coordination Network project titled "What About Model Data? - Best Practices for Preservation and Replicability," which is working to develop tools to assist researchers in determining what elements of geoscience modeling research should be preserved and shared to meet evolving community open science expectations. Specifically, the paper offers approaches to address the following key questions:
• How should preservation of model software and outputs differ for projects oriented toward knowledge production vs. projects oriented toward data production?
• What components of dynamical geoscience modeling research should be preserved and shared?
• What curation support is needed to enable sharing and preservation of geoscience simulation models and their output?
• What cultural barriers impede geoscience modelers from making progress on these topics?
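As one possible illustration of the second question, the sketch below writes a small machine-readable provenance record for a simulation run as JSON. The schema and all field values are hypothetical and are not the project's recommended best practice.

```python
# A hypothetical provenance record for a preserved simulation run.
# Field names and values are illustrative only, not a community standard.
import json

run_record = {
    "model": {"name": "example-atmosphere-model", "version": "4.2.1",
              "source_repository": "https://example.org/model.git",
              "commit": "abc1234"},
    "configuration": {"namelist": "run/namelist.input", "grid": "0.25 deg global",
                      "start": "2020-01-01T00:00Z", "end": "2020-02-01T00:00Z"},
    "inputs": [{"dataset": "example reanalysis forcing", "doi": "10.0000/example"}],
    "outputs": [{"file": "output/monthly_means.nc", "checksum": "sha256:placeholder"}],
    "compute_environment": {"platform": "example HPC system",
                            "compiler": "gfortran 12", "mpi_ranks": 512},
    "contact": "researcher@example.org",
    "license": "CC-BY-4.0",
}

with open("run_provenance.json", "w") as fh:
    json.dump(run_record, fh, indent=2)
```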


2021 ◽  
pp. 53-75
Author(s):  
Gabriela Fatková ◽  
Tereza Šlehoferová

In this article, we use research on the perception of a landscape to show how structured methods of data production drawn primarily from cognitive anthropology can be applied together with data analysis and visualization in geographic information systems (GIS). We describe the process of working with data gained through qualitative techniques and transferred, using semantic domain analysis, to the GIS interface, and outline the room for interpretation opened up by such a multilevel approach using various tools. Although we subjected the described procedures to pilot verification in our own research, the combination of the presented methodological approaches remains open to scientific discussion and, above all, to further experimentation.
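A minimal sketch of the kind of bridge described here, assuming coded free-list responses with point coordinates and using the geopandas library; the place names, semantic domains, and coordinates are invented and do not come from the study.

```python
# Moving qualitatively coded responses (free-list items assigned to semantic
# domains) into a GIS layer with geopandas. All data are invented.
import geopandas as gpd
import pandas as pd

responses = pd.DataFrame({
    "respondent": [1, 1, 2, 3],
    "place": ["old mill", "chapel hill", "old mill", "border stone"],
    "semantic_domain": ["work", "sacred", "work", "memory"],
    "lon": [13.351, 13.360, 13.351, 13.342],
    "lat": [49.740, 49.736, 49.740, 49.751],
})

points = gpd.GeoDataFrame(
    responses,
    geometry=gpd.points_from_xy(responses["lon"], responses["lat"]),
    crs="EPSG:4326",   # WGS84 coordinates
)

# How often each place was named, per semantic domain: a simple bridge between
# free-list salience counts and map-based visualisation.
print(points.groupby(["place", "semantic_domain"]).size().rename("mentions"))

# Plot points coloured by domain (requires matplotlib); in practice this layer
# would be overlaid on a basemap or exported to a GIS project.
points.plot(column="semantic_domain", legend=True)
```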


2021 ◽  
Author(s):  
Lambertus Michael Alink ◽  
Edward T Eng ◽  
Robert Gheorghita ◽  
William Rice ◽  
Anchi Cheng ◽  
...  

Recent developments in cryo-electron microscopy (cryoEM) have led to the routine determination of structures at near-atomic resolution and greatly increased the number of biomedical researchers wanting access to high-end cryoEM instrumentation. The high costs and long wait times for gaining access encourage facilities to maximize instrument uptime for data collection. To support these goals, we developed a System Environmental Metrics Collector for facilities (SEMCf) that serves as a laboratory performance and management tool. SEMCf consists of an architecture of automated and robust sensors that track, organize, and report key facility metrics. The individual sensors are connected to Raspberry Pi (RPi) single-board computers installed in close proximity to the quantities being measured. The system is controlled by a central server that may be installed on an RPi or on an existing microscope support PC. Tracking the system and its environment provides early warning of imminent issues, suggests interventions needed to optimize data production, and indicates when preventative maintenance should be scheduled. The sensor components are relatively inexpensive and widely available commercially, and the open-source design and software enable straightforward implementation, customization, and optimization by any facility that would benefit from real-time environmental monitoring and reporting.
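As a rough illustration of the sensor-node side of such an architecture, the sketch below shows a Raspberry Pi script that periodically reads an environmental value and reports it to a central server over HTTP. The endpoint URL, payload fields, and read_temperature() stub are hypothetical and are not part of SEMCf itself.

```python
# Hypothetical sensor node: read a value periodically and POST it to a server.
# Endpoint, payload schema, and the sensor stub are illustrative only.
import time
from datetime import datetime, timezone

import requests

SERVER_URL = "http://semcf-server.local:8080/api/metrics"   # hypothetical endpoint
SENSOR_ID = "room-12-temperature"
INTERVAL_S = 60

def read_temperature() -> float:
    """Stand-in for a real sensor driver (e.g. an I2C or 1-Wire probe)."""
    return 21.3  # placeholder value in degrees Celsius

def main() -> None:
    while True:
        payload = {
            "sensor_id": SENSOR_ID,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "value": read_temperature(),
            "unit": "C",
        }
        try:
            requests.post(SERVER_URL, json=payload, timeout=5)
        except requests.RequestException as exc:
            # Keep sampling even if the central server is briefly unreachable.
            print(f"report failed: {exc}")
        time.sleep(INTERVAL_S)

if __name__ == "__main__":
    main()
```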

