digital data
Recently Published Documents


TOTAL DOCUMENTS: 3789 (five years: 1364)

H-INDEX: 52 (five years: 10)

AI & Society ◽  
2022 ◽  
Author(s):  
Lise Jaillant ◽  
Annalina Caputo

Abstract: Co-authored by a Computer Scientist and a Digital Humanist, this article examines the challenges faced by cultural heritage institutions in the digital age, which have led to the closure of the vast majority of born-digital archival collections. It focuses particularly on cultural organizations such as libraries, museums and archives used by historians, literary scholars and other Humanities scholars. Most born-digital records held by cultural organizations are inaccessible due to privacy, copyright, commercial and technical issues. Even when born-digital data are publicly available (as in the case of web archives), users often need to travel physically to repositories such as the British Library or the Bibliothèque Nationale de France to consult web pages. Provided with enough sample data on which to train their models, AI, and more specifically machine learning algorithms, can improve and ease access to digital archives by learning to perform complex human tasks, from providing intelligent support for searching the archives to automating tedious and time-consuming tasks. In this article, we focus on sensitivity review as a practical solution to unlock digital archives, allowing archival institutions to make non-sensitive information available. This promise to make archives more accessible does not come free of warnings about potential pitfalls and risks: inherent errors, "black box" approaches that make the algorithm inscrutable, and risks related to biased, fake, or partial information. Our central argument is that AI can deliver on its promise to make digital archival collections more accessible, but it also creates new challenges, particularly in terms of ethics. In the conclusion, we stress the importance of fairness, accountability and transparency in the process of making digital archives more accessible.
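Sensitivity review maps naturally onto supervised text classification: given expert-labelled examples, a model ranks unseen records by likelihood of containing sensitive material so reviewers can triage them. A minimal sketch with scikit-learn, using invented toy data rather than anything from the article:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled snippets; a real archive would need expert-reviewed
# training data and far richer features than bag-of-words.
docs = [
    "medical history of the patient, diagnosis attached",
    "minutes of the public planning meeting",
    "home address and phone number of the correspondent",
    "press release announcing the new exhibition",
]
labels = [1, 0, 1, 0]  # 1 = sensitive, 0 = releasable

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(docs, labels)

# Rank unseen records by estimated probability of sensitivity so that
# human reviewers can prioritise the riskiest ones.
queue = ["correspondence containing a donor's bank details"]
print(clf.predict_proba(queue)[:, 1])
```

Keeping a probability rather than a hard label matters here: the errors and "black box" risks discussed above make fully automated release decisions inappropriate, so the model only orders the human review queue.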


2022 ◽  
Vol 2 (1) ◽  
pp. 54-65
Author(s):  
John Given

In this paper it is argued that digital technologies will have a transformative effect on the social sciences in general and on the fast-developing field of narrative studies in particular. It is argued that the integrative and interdisciplinary nature of narrative approaches is further enhanced by the development of digital technologies, and that the collection of digital data will also drive theoretical and methodological developments in narrative studies. Biographical Sociology will also need to take account of lives lived in, and transformed by, the digital domain. How these technologies may influence data collection methods, how they might influence thinking about what constitutes data, and what effects this might have on the remodelling of theoretical approaches are all pressing questions for the development of a twenty-first-century narratology. As Marshall McLuhan once put it, "First we shape our tools and then our tools shape us."


2022 ◽  
Author(s):  
Anatoly Soloviev ◽  
Dmitry Peregoudov

Abstract: In 2019, the WDC for Solar-Terrestrial Physics in Moscow digitized the archive of observations of the Earth's magnetic field carried out by the Soviet satellites Kosmos-49 (1964) and Kosmos-321 (1970). As a result, the scientific community gained access for the first time to a unique digital data set registered at the very beginning of the scientific space era. This article sets out three objectives. First, the quality of the obtained measurements is assessed by comparison with the IGRF reference field model. Second, we assess the quality of the models that were derived at the time from the data of these two satellites and from ground-based observations. Third, we propose a new, improved model of the geomagnetic field secular variation based on the scalar measurements of the Kosmos-49 and Kosmos-321 satellites, using modern mathematical methods.
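Assessing measurement quality against IGRF (the first objective) amounts to evaluating the reference field at each observation point and computing scalar residuals. A minimal sketch, assuming the third-party ppigrf package as the IGRF evaluator; the article does not specify its tooling:

```python
import numpy as np
from datetime import datetime
import ppigrf  # assumed IGRF evaluator (pip install ppigrf)

def rms_misfit(lat, lon, alt_km, F_obs_nT, epoch=datetime(1964, 11, 1)):
    """RMS misfit (nT) between satellite scalar measurements and IGRF.
    lat/lon in degrees, alt_km is altitude; all arrays of equal length."""
    # ppigrf returns the east, north and up components of B in nT
    Be, Bn, Bu = ppigrf.igrf(lon, lat, alt_km, epoch)
    F_model = np.sqrt(Be**2 + Bn**2 + Bu**2).ravel()
    residuals = np.asarray(F_obs_nT) - F_model
    return np.sqrt(np.mean(residuals**2))
```

The 1964 epoch default is only a placeholder; a real comparison would use the per-orbit timestamps of each Kosmos measurement.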


Author(s):  
Bouslehi Hamdi ◽  
Seddik Hassen ◽  
Amaria Wael

The security of digital data has become an essential need. Since the storage and transmission of data over social networks have become inevitable, securing that data is no longer a luxury but an absolute necessity. Images account for a large share of this data, and their presence grows more important every day. In this paper, we use a new encryption technique, poly-encryption, which is based on decomposing the image into blocks in a random manner; each block is then encrypted with a different algorithm.
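Since the abstract only outlines the scheme, the following is a speculative sketch of the poly-encryption idea: split the image bytes into blocks and let a seed-keyed pseudo-random choice assign each block one of several ciphers. AES-CTR and ChaCha20 stand in for the unnamed algorithms, and the block size is arbitrary:

```python
import os
import random
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 4096  # illustrative block size in bytes

def _aes_ctr(data: bytes, key: bytes) -> bytes:
    nonce = os.urandom(16)
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return nonce + enc.update(data) + enc.finalize()

def _chacha20(data: bytes, key: bytes) -> bytes:
    nonce = os.urandom(16)  # cryptography's ChaCha20 takes a 16-byte nonce
    enc = Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor()
    return nonce + enc.update(data) + enc.finalize()

CIPHERS = [_aes_ctr, _chacha20]

def poly_encrypt(image_bytes: bytes, key: bytes, seed: int) -> list:
    """Split the image into blocks and encrypt each with a cipher chosen
    pseudo-randomly from CIPHERS; the receiver re-derives the same
    sequence of choices from the shared seed. key must be 32 bytes."""
    rng = random.Random(seed)
    blocks = [image_bytes[i:i + BLOCK]
              for i in range(0, len(image_bytes), BLOCK)]
    return [rng.choice(CIPHERS)(b, key) for b in blocks]
```

Both ciphers are stream ciphers, so decryption repeats the same keystream operation using the nonce stored at the front of each encrypted block.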


F1000Research ◽  
2022 ◽  
Vol 11 ◽  
pp. 12
Author(s):  
Franco Röckel ◽  
Toni Schreiber ◽  
Danuta Schüler ◽  
Ulrike Braun ◽  
Ina Krukenberg ◽  
...  

With the ongoing cost decrease of genotyping and sequencing technologies, accurate and fast phenotyping remains the bottleneck in the utilization of plant genetic resources for breeding and breeding research. Although cost-efficient high-throughput phenotyping platforms are emerging for specific traits and/or species, manual phenotyping is still widely used and remains a time- and money-consuming step. Approaches that improve data recording, processing or handling are pivotal steps towards the efficient use of genetic resources and are demanded by the research community. Therefore, we developed PhenoApp, an open-source Android app for tablets and smartphones to facilitate the digital recording of phenotypic data in the field and in greenhouses. It is a versatile tool that offers the possibility to fully customize the descriptors/scales for any possible scenario, also in accordance with international information standards such as MIAPPE (Minimum Information About a Plant Phenotyping Experiment) and FAIR (Findable, Accessible, Interoperable, and Reusable) data principles. Furthermore, PhenoApp enables the use of pre-integrated, ready-to-use BBCH (Biologische Bundesanstalt für Land- und Forstwirtschaft, Bundessortenamt und CHemische Industrie) scales for apple, cereals, grapevine, maize, potato, rapeseed and rice. Additional BBCH scales can easily be added. The simple and adaptable structure of the input and output files enables easy data handling with spreadsheet software, or even integration into the workflow of laboratory information management systems (LIMS). PhenoApp is therefore a decisive contribution to increasing the efficiency of digital data acquisition in genebank management, but it also contributes to breeding and breeding research by accelerating the labour-intensive and time-consuming acquisition of phenotyping data.
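Because the export is a simple tabular file, downstream handling needs little more than a spreadsheet or a few lines of scripting. A sketch of such post-processing with pandas; the file name and column names are hypothetical, since the abstract does not spell out the export schema:

```python
import pandas as pd

# Hypothetical PhenoApp-style export: one row per observation, with
# accession, descriptor and recorded value columns (actual names depend
# on the user's customized descriptors).
df = pd.read_csv("phenoapp_export.csv")

summary = (df.groupby(["accession", "descriptor"])["value"]
             .agg(["mean", "count"]))
summary.to_csv("phenotype_summary.csv")  # ready for a spreadsheet or LIMS import
```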


Photonics ◽  
2022 ◽  
Vol 9 (1) ◽  
pp. 33
Author(s):  
Shu-Hao Chang

Machine learning in photonics has potential applications in many industries. However, research on its patent portfolios is still lacking. The purpose of this study was to assess the status of machine learning in photonics technology and patent portfolios and to investigate major assignees, in order to generate a better understanding of the developmental trends of machine learning in photonics. This can provide governments and industry with a resource for planning strategic development. I used data-mining methods (correspondence analysis and K-means clustering) to explore competing technological and strategic-group relationships within the field of machine learning in photonics. The data comprised patents granted in the USPTO database from 2019 to 2020. The results reveal that patents were primarily in image data processing, electronic digital data processing, wireless communication networks, and healthcare informatics and diagnosis. I assessed the relative technological advantages of various assignees and propose policy recommendations for technology development.
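Strategic groups of the kind described here are commonly obtained by clustering assignees on the distribution of their patents across technology classes. A minimal sketch with invented counts, since the abstract does not give the study's actual matrix or preprocessing:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

# Hypothetical patent counts: rows = assignees, columns = the four main
# classes named above (image data processing, electronic digital data
# processing, wireless communication networks, healthcare informatics).
X = np.array([[40, 12,  3,  1],
              [35, 15,  5,  0],
              [ 2,  4, 30,  1],
              [ 1,  2, 28,  3],
              [ 5,  3,  2, 25]], dtype=float)

X_norm = normalize(X)  # compare portfolio shapes, not portfolio sizes
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_norm)
print(km.labels_)      # strategic-group membership for each assignee
```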


Author(s):  
Biagio Aragona

Only through expert knowledge of the different types of big data and of their possible uses, limits and advantages will sociology truly benefit from these empirical bases. In this article, starting from a classification of the various types of big data, some areas of use in social research are described, highlighting critical questions and ethical problems. The limits are tied to fundamental questions regarding the quality of big data. Another key issue concerns access. A further methodological aspect to keep in mind is that digital data on the web should be considered non-intrusive. Covert research methods have challenged the established ethical evaluation practice adopted in most research institutions: informed consent. Digital ethical guidelines cannot be universal and fixed once and for all.


2022 ◽  
Vol 9 (1) ◽  
Author(s):  
Loris Belcastro ◽  
Riccardo Cantini ◽  
Fabrizio Marozzo ◽  
Alessio Orsino ◽  
Domenico Talia ◽  
...  

Abstract: In the age of the Internet of Things and social media platforms, huge amounts of digital data are generated by and collected from many sources, including sensors, mobile devices, wearable trackers and security cameras. This data, commonly referred to as Big Data, challenges current storage, processing, and analysis capabilities. New models, languages, systems and algorithms continue to be developed to effectively collect, store, analyze and learn from Big Data. Most recent surveys provide a global analysis of the tools used in the main phases of Big Data management (generation, acquisition, storage, querying and visualization of data). In contrast, this work analyzes and reviews parallel and distributed paradigms, languages and systems used today to analyze and learn from Big Data on scalable computers. In particular, we provide an in-depth analysis of the properties of the main parallel programming paradigms (MapReduce, workflow, BSP, message passing, and SQL-like) and, through programming examples, we describe the most widely used systems for Big Data analysis (e.g., Hadoop, Spark, and Storm). Furthermore, we discuss and compare the different systems by highlighting the main features of each, their diffusion (community of developers and users), and the main advantages and disadvantages of using them to implement Big Data analysis applications. The final goal of this work is to help designers and developers identify and select the most appropriate programming solution based on their skills, hardware availability, application domain and purposes, while also considering the support provided by the developer community.
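As a flavour of the paradigms the survey covers, here is the canonical MapReduce-style word count expressed in PySpark; this is a generic illustration rather than an example taken from the article, and the input/output paths are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount").getOrCreate()
sc = spark.sparkContext

counts = (sc.textFile("hdfs:///data/corpus/*.txt")    # map phase input
            .flatMap(lambda line: line.split())        # emit words
            .map(lambda word: (word, 1))               # key-value pairs
            .reduceByKey(lambda a, b: a + b))          # shuffle + per-key reduce
counts.saveAsTextFile("hdfs:///data/wordcounts")
spark.stop()
```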


2022 ◽  
Vol 12 (2) ◽  
pp. 551
Author(s):  
Andrea Scribante ◽  
Simone Gallo ◽  
Maurizio Pascadopoli ◽  
Pietro Canzi ◽  
Stefania Marconi ◽  
...  

In recent years, both medicine and dentistry have undergone a revolution represented by the introduction of more and more digital technologies for both diagnostic and therapeutic purposes. Additive manufacturing is a relatively new technology consisting of a computer-aided design and computer-aided manufacturing (CAD/CAM) workflow, which allows many physical materials to be replaced with digital data. This process requires three fundamental steps: the digitalization of an item through a scanner, the editing of the acquired data using software, and the manufacturing technology that transforms the digital data into a final product. This narrative review aims to discuss the recent introduction of the abovementioned digital workflow in dentistry. The main advantages and disadvantages of the process will be discussed, along with a brief description of possible applications in orthodontics.


MAUSAM ◽  
2022 ◽  
Vol 44 (4) ◽  
pp. 347-352
Author(s):  
S. N. BHATTACHARYA

Digital records of seismic waves observed at the Seismic Research Observatory, Chiang Mai, Thailand, have been analysed for two earthquakes in western Nepal. The digital data are processed by the floating-filter and phase-equalization methods to obtain surface waves free from noise. Group velocities of Love and Rayleigh waves are obtained by frequency-time analysis of these noise-free surface waves. The periods of the group velocities range from 17 to 62 s for fundamental-mode Rayleigh waves and from 17 to 66 s for fundamental-mode Love waves. The wave paths cross both central Myanmar (Burma) and the Indo-Gangetic plain. The group velocity data for surface waves across central Myanmar (Burma) have been obtained after correcting the data for the path across the Indo-Gangetic plain. Inversion of the data gives the average crustal and subcrustal structure of central Myanmar (Burma). The modelled structure shows two separate sedimentary layers, each 8 km thick; the lower sedimentary layer forms the low-velocity zone of the crust. The total thickness of the central Myanmar (Burma) crust is found to be 55 km.
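Frequency-time analysis of this kind passes the seismogram through a bank of narrow-band filters and reads the group arrival time off each envelope peak; group velocity is then epicentral distance divided by arrival time. A schematic sketch of the core step, with the Gaussian filter width and other parameters as illustrative assumptions rather than the article's exact procedure:

```python
import numpy as np
from scipy.signal import hilbert

def group_velocities(trace, dt, distance_km, periods, alpha=20.0):
    """For each period T: narrow-band Gaussian filter around 1/T,
    envelope via the Hilbert transform, group arrival = envelope peak
    time, group velocity = distance / arrival time."""
    n = len(trace)
    freqs = np.fft.rfftfreq(n, dt)
    spec = np.fft.rfft(trace)
    velocities = []
    for T in periods:
        f0 = 1.0 / T
        gauss = np.exp(-alpha * ((freqs - f0) / f0) ** 2)
        narrow = np.fft.irfft(spec * gauss, n)   # narrow-band seismogram
        envelope = np.abs(hilbert(narrow))
        t_peak = np.argmax(envelope) * dt
        velocities.append(distance_km / t_peak if t_peak > 0 else np.nan)
    return np.array(velocities)
```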

