Algorithmic Law: Law Production by Data or Data Production by Law?
2021 · pp. 78-92
Author(s): Mariavittoria Catanzariti

2021 · Vol 5 (1) · pp. 15
Author(s): Dimitris Koryzis, Apostolos Dalas, Dimitris Spiliotopoulos, Fotios Fitsilis

Societies are entering an age of technological disruption, which also affects governance institutions such as parliaments. Parliaments therefore need to adapt swiftly by incorporating innovative methods into their organizational culture and novel technologies into their working procedures. The Inter-Parliamentary Union's World e-Parliament Reports capture digital transformation trends towards open data production, standardized and knowledge-driven business processes, and the implementation of inclusive and participatory schemes. Nevertheless, there is still limited consensus on how these trends will materialize into specific tools, products, and services with added value for parliamentary and societal stakeholders. This article outlines the rapid evolution of the digital parliament from the user perspective. In doing so, it describes a transformational framework based on the evaluation of empirical data from an expert survey of parliamentarians and parliamentary administrators. Basic sets of tools and technologies that intra-parliamentary stakeholders perceive as vital for future parliamentary use, such as systems and processes for information and knowledge sharing, are analyzed. Moreover, boundary conditions for the development and implementation of parliamentary technologies are set out and highlighted. Concluding recommendations regarding the expected investments, interdisciplinary research, and cross-sector collaboration within the defined framework are presented.


Separations
2021 · Vol 8 (7) · pp. 104
Author(s): Leah M. Arrigo, Jun Jiang, Zachary S. Finch, James M. Bowen, Staci M. Herman, ...

The measurement of radioactive fission products from nuclear events has important implications for nuclear data production, environmental monitoring, and nuclear forensics. In a previous paper, the authors reported the optimization of an intra-group lanthanide separation using LN extraction resin from Eichrom Technologies®, Inc. and a nitric acid gradient. In this work, the method was demonstrated for the separation and quantification of multiple short-lived fission product lanthanide isotopes from a fission product sample produced by the thermal irradiation of highly enriched uranium. The separations were performed in parallel in quadruplicate, with reproducible results and high decontamination factors for 153Sm, 156Eu, and 161Tb. The fission yields obtained here for 144Ce, 153Sm, 156Eu, and 161Tb are consistent with published values. This work demonstrates the effectiveness of the separations for the intended application of short-lived lanthanide fission product analysis requiring high decontamination factors.
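As a minimal illustration of the arithmetic behind such figures, and not the authors' actual analysis, the Python sketch below decay-corrects a measured activity and computes a decontamination factor; all numeric values in it are invented for illustration.

```python
# Minimal sketch, not the authors' analysis: decay-correct a measured activity
# and compute a decontamination factor (DF) for a purified lanthanide fraction.
# All numeric values below are invented for illustration.
import math

def decay_correct(activity_bq: float, half_life_h: float, elapsed_h: float) -> float:
    """Correct a measured activity back to a reference time: A0 = A * exp(lambda * t)."""
    lam = math.log(2) / half_life_h
    return activity_bq * math.exp(lam * elapsed_h)

def decontamination_factor(contaminant_feed_bq: float, contaminant_fraction_bq: float) -> float:
    """DF = contaminant activity in the feed / contaminant activity left in the fraction."""
    return contaminant_feed_bq / contaminant_fraction_bq

# Example: 1 MBq of a contaminant in the feed, 50 Bq left after separation -> DF = 2e4.
print(decontamination_factor(1.0e6, 50.0))
# Example: activity measured 12 h after the reference time, 46.3 h half-life (~153Sm).
print(decay_correct(activity_bq=500.0, half_life_h=46.3, elapsed_h=12.0))
```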


2021 · Vol 14 · pp. 194008292110147
Author(s): Dipto Sarkar, Colin A. Chapman

The term ‘smart forest’ is not yet common, but the proliferation of sensors, algorithms, and technocentric thinking in conservation, as in most other aspects of our lives, suggests we are on the brink of this evolution. While there has been some critical discussion about the value of using smart technology in conservation, a holistic discussion of the broader technological, social, and economic interactions involved in using big data, sensors, artificial intelligence, and global corporations is largely missing. Here, we explore pitfalls that are worth considering as forests are gradually converted into technological sites of data production for optimized biodiversity conservation and are consequently incorporated into the digital economy. We consider who the enablers of technologically enhanced forests are, and how the gradual operationalization of smart forests will affect the traditional stakeholders of conservation. We also examine the implications of carpeting forests with sensors and the types of questions this will encourage. To contextualize our arguments, we provide examples from our work in Kibale National Park, Uganda, which hosts one of the longest continuously running research field stations in Africa.


Author(s): J. R. Mullaney, L. Makrygianni, V. Dhillon, S. Littlefair, K. Ackley, ...

Abstract The past few decades have seen the burgeoning of wide-field, high-cadence surveys, the most formidable of which will be the Legacy Survey of Space and Time (LSST) to be conducted by the Vera C. Rubin Observatory. So new is the field of systematic time-domain survey astronomy, however, that major scientific insights will continue to be obtained using smaller, more flexible systems than the LSST. One such example is the Gravitational-wave Optical Transient Observer (GOTO), whose primary science objective is the optical follow-up of gravitational wave events. The amount and rate of data production by GOTO and other wide-area, high-cadence surveys present a significant challenge to data processing pipelines, which need to operate in near-real time to fully exploit the time domain. In this study, we adapt the Rubin Observatory LSST Science Pipelines to process GOTO data, thereby exploring the feasibility of using this ‘off-the-shelf’ pipeline to process data from other wide-area, high-cadence surveys. We describe how we use the LSST Science Pipelines to process raw GOTO frames and ultimately produce calibrated coadded images and photometric source catalogues. Comparing the measured astrometry and photometry to those of matched sources from PanSTARRS DR1, we find that measured source positions are typically accurate to sub-pixel levels, and that measured L-band photometry is accurate to ~50 mmag at m_L ~ 16 and ~200 mmag at m_L ~ 18. These values compare favourably to those obtained using GOTO's primary, in-house pipeline, gotophoto, in spite of both pipelines having undergone further development and improvement beyond the implementations used in this study. Finally, we release a generic ‘obs package’ that others can build upon, should they wish to use the LSST Science Pipelines to process data from other facilities.
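A rough sketch of the kind of catalogue comparison described above follows: cross-matching a GOTO-like source catalogue against PanSTARRS DR1 with astropy and measuring astrometric offsets and photometric residuals. The file names and column names are assumptions, and this is not the pipeline code used in the study.

```python
# A rough sketch of the catalogue comparison described above: cross-match a
# GOTO-like source catalogue against PanSTARRS DR1 and measure astrometric
# offsets and photometric residuals. The file names and column names are
# assumptions; this is not the pipeline code used in the study.
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.table import Table

goto = Table.read("goto_sources.fits")   # assumed columns: ra, dec, mag_L
ps1 = Table.read("ps1_dr1_region.fits")  # assumed columns: ra, dec, mag_r

goto_coords = SkyCoord(goto["ra"], goto["dec"], unit="deg")
ps1_coords = SkyCoord(ps1["ra"], ps1["dec"], unit="deg")

# Nearest-neighbour match on the sky; keep pairs closer than 1 arcsec.
idx, d2d, _ = goto_coords.match_to_catalog_sky(ps1_coords)
good = d2d < 1.0 * u.arcsec

# Astrometric accuracy: median on-sky separation of the matched pairs.
print("median offset:", np.median(d2d[good].arcsec), "arcsec")

# Photometric accuracy: scatter of the magnitude residuals, in mmag.
dmag = np.asarray(goto["mag_L"][good]) - np.asarray(ps1["mag_r"][idx[good]])
print("photometric scatter:", 1e3 * np.std(dmag), "mmag")
```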


2016 · Vol 2016 · pp. 1-16
Author(s): Jennifer D. Hintzsche, William A. Robinson, Aik Choon Tan

Whole Exome Sequencing (WES) is the application of next-generation sequencing technology to determine the variations in the exome, and it is becoming a standard approach to studying genetic variants in disease. Understanding the exomes of individuals at single-base resolution allows the identification of actionable mutations for disease treatment and management. WES technologies have shifted the bottleneck from experimental data production to computationally intensive, informatics-based data analysis. Novel computational tools and methods have been developed to analyze and interpret WES data. Here, we review some of the current tools being used to analyze WES data, ranging from the alignment of raw sequencing reads all the way to linking variants to actionable therapeutics. The strengths and weaknesses of each tool are discussed to help researchers make more informed decisions when selecting the best tools for analyzing their WES data.
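To make one step of such a workflow concrete, here is a minimal, self-contained Python sketch that reads a VCF file (plain text) and keeps variants passing simple quality and depth thresholds. The thresholds and file name are assumptions for illustration; production pipelines use the dedicated tools surveyed in the review for this step.

```python
# Minimal, self-contained sketch of one step in a WES workflow: reading a VCF
# file (plain text) and keeping variants that pass simple quality and depth
# thresholds. The thresholds and file name are assumptions for illustration;
# production pipelines use dedicated tools for this step.

def filter_vcf(path: str, min_qual: float = 30.0, min_depth: int = 10):
    """Yield (chrom, pos, ref, alt) for variants passing basic filters."""
    with open(path) as fh:
        for line in fh:
            if line.startswith("#"):  # skip header and metadata lines
                continue
            fields = line.rstrip("\n").split("\t")
            chrom, pos, _id, ref, alt, qual, flt, info = fields[:8]
            if flt not in ("PASS", "."):
                continue
            if qual != "." and float(qual) < min_qual:
                continue
            # Pull the total read depth (DP) out of the INFO field, if present.
            tags = dict(kv.split("=", 1) for kv in info.split(";") if "=" in kv)
            if "DP" in tags and int(tags["DP"]) < min_depth:
                continue
            yield chrom, int(pos), ref, alt

# Example usage with an assumed input file:
for variant in filter_vcf("sample.vcf"):
    print(variant)
```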


2012 · Vol 256-259 · pp. 2279-2284
Author(s): Lian Ying Li, Zhang Huang, Xiao Lan Xu

A sufficient update frequency is vital for the digital map data in a vehicle navigation system: only when the map data are kept up to date can the quality of navigation be assured. Today, companies producing digital map data for vehicle navigation must expend considerable labor, material, and capital to collect and update data in order to maintain that frequency. Over the history of electronic navigation data updating, considerable progress has been made both in the methods and processes of data production and in the way maps are managed. Updating has moved from the CD to the network, from wired to wireless, and from wholesale replacement to incremental updates, and each of these technical changes has helped raise the data updating rate. Change detection is a prerequisite and foundation for updating electronic navigation data: by rapidly identifying the areas that have changed and applying the appropriate updating method, the original navigation database and the terminal's physical data can be maintained systematically. Starting from the application requirements for dynamic data updating, this paper analyzes change detection methods between versions of navigation data for generating incremental data, focusing on the rasterization of features and attributes, and explores a new approach to quickly derive the incremental data between versions.
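The following Python sketch illustrates the rasterization-based change detection idea in a heavily simplified form: two versions of a navigation layer, modelled here as (x, y, attribute_code) point features, are burned onto the same grid, and differing cells mark the areas for which incremental data must be generated. The grid size, extent, and sample features are all assumptions, not the paper's method.

```python
# Illustrative sketch of change detection by rasterization: two versions of a
# navigation layer, modelled here as (x, y, attribute_code) point features, are
# burned onto the same grid, and differing cells mark the areas for which
# incremental update data must be generated. The grid size, extent, and sample
# features are all assumptions.
import numpy as np

def rasterize(features, shape=(100, 100), extent=(0.0, 10.0, 0.0, 10.0)):
    """Burn (x, y, attribute_code) features onto an integer grid (0 = empty)."""
    xmin, xmax, ymin, ymax = extent
    grid = np.zeros(shape, dtype=np.int32)
    for x, y, code in features:
        col = int((x - xmin) / (xmax - xmin) * (shape[1] - 1))
        row = int((y - ymin) / (ymax - ymin) * (shape[0] - 1))
        grid[row, col] = code
    return grid

old_version = [(1.2, 3.4, 7), (5.0, 5.0, 2)]
new_version = [(1.2, 3.4, 7), (5.0, 5.0, 3), (8.8, 1.1, 4)]  # changed attribute + new feature

# Cells whose contents differ between versions are the change set.
changed = rasterize(old_version) != rasterize(new_version)
print("changed cells:", np.argwhere(changed))
```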


2012
Author(s): Agis Papantoniou, Marios Meimaris, Michalis N. Vafopoulos, Ioannis Anagnostopoulos, Giorgos Alexiou, ...

2012 · Vol 5s1 · pp. BII.S9042
Author(s): John P. Pestian, Pawel Matykiewicz, Michelle Linn-Gust, Brett South, Ozlem Uzuner, ...

This paper reports on a shared task involving the assignment of emotions to suicide notes. Two features distinguished this task from previous shared tasks in the biomedical domain. One is that it resulted in a corpus of fully anonymized clinical text and annotated suicide notes. This resource is permanently available and will (we hope) facilitate future research. The other key feature is that the task required categorization with respect to a large set of labels. The number of participants was larger than in any previous biomedical challenge task. We describe the data production process and the evaluation measures, and give a preliminary analysis of the results. Many systems performed at levels approaching inter-coder agreement, suggesting that human-like performance on this task is within the reach of currently available technologies.
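As a minimal sketch of a plausible evaluation measure for this kind of multi-label categorization task, the Python snippet below computes micro-averaged precision, recall, and F1 over per-document label sets. The label sets in the example are invented, and this is not the official scorer from the shared task.

```python
# A minimal sketch of a plausible evaluation measure for a multi-label emotion
# assignment task: micro-averaged precision, recall, and F1 over per-document
# label sets. The label sets below are invented examples, and this is not the
# official scorer from the shared task.

def micro_prf(gold: list[set[str]], predicted: list[set[str]]):
    """Micro-averaged precision, recall, and F1 over per-document label sets."""
    tp = sum(len(g & p) for g, p in zip(gold, predicted))
    fp = sum(len(p - g) for g, p in zip(gold, predicted))
    fn = sum(len(g - p) for g, p in zip(gold, predicted))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = [{"hopelessness", "love"}, {"guilt"}, set()]
pred = [{"hopelessness"}, {"guilt", "anger"}, set()]
print(micro_prf(gold, pred))  # -> (0.666..., 0.666..., 0.666...)
```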

