automated processing
Recently Published Documents

TOTAL DOCUMENTS: 532 (FIVE YEARS: 186)
H-INDEX: 31 (FIVE YEARS: 6)

2022 ◽  
Vol 2 (1) ◽  
pp. 1-10
Author(s):  
Chengxin Jiang ◽  
Ping Zhang ◽  
Malcolm C. A. White ◽  
Robert Pickle ◽  
Meghan S. Miller

Abstract The tectonic setting of Timor–Leste and Eastern Indonesia comprises a complex transition from oceanic lithosphere subduction to arc–continent collision. To better understand the deformation and convergent-zone structure of the region, we derive a new catalog of earthquake hypocenters and magnitudes from five years of continuous seismic data recorded by a temporary deployment, using an automated processing procedure. This procedure combines a machine-learning phase picker, EQTransformer, with a sequential earthquake association and location workflow. We detect and locate ∼19,000 events during 2014–2018, demonstrating that earthquake sequences in a complex convergent plate setting can be characterized from raw seismic data using a well-trained machine-learning picker. This study provides the most complete catalog available for the region for the duration of the temporary deployment, which includes a complex pattern of crustal events across the collision zone and into the back-arc, as well as abundant deep slab seismicity.
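The association step of such a workflow, grouping phase picks from multiple stations into candidate events, can be sketched as follows. This is a minimal illustration only; the greedy time-window strategy, the window length, and the station threshold are assumptions for the sketch, not the authors' actual association algorithm.

```python
from dataclasses import dataclass

@dataclass
class Pick:
    station: str
    phase: str   # "P" or "S"
    time: float  # arrival time in seconds since a common epoch

def associate(picks, window=30.0, min_stations=4):
    """Greedy time-window association: all picks within `window`
    seconds of the earliest unassigned pick form a candidate event,
    which is kept only if enough distinct stations contributed."""
    events = []
    remaining = sorted(picks, key=lambda p: p.time)
    while remaining:
        t0 = remaining[0].time
        group = [p for p in remaining if p.time - t0 <= window]
        if len({p.station for p in group}) >= min_stations:
            events.append(group)
        remaining = [p for p in remaining if p.time - t0 > window]
    return events
```

A real workflow would additionally check travel-time consistency before declaring an event; the station-count test above is the simplest possible stand-in for that step.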


2021 ◽  
Vol 54 (6) ◽  
pp. 257-270
Author(s):  
Andrey A. Pomerantsev ◽  
Alexander N. Starkin ◽  
Elena V. Chervyakova ◽  
...  

Introduction. The health of schoolchildren is the foundation of the educational process and the key to success in future working life. The current level of information technology makes it possible to bring health monitoring to a new, higher-quality level. Purpose of the study: by comparing historically established approaches to assessing schoolchildren's health, to identify the main development trends and the promising technologies suitable for determining an integral indicator of schoolchildren's health. Research methodology and techniques. The research is theoretical. To find information about innovative technologies, we analyzed scientific articles in Russian and English drawn from scientometric databases. As a result, we selected 14 technologies that best met the requirements of minimal time consumption, remote operation, and minimal invasiveness. Results. Technologies used in biomechanics, medicine, forensic science, and navigation can together make it possible to comprehensively assess the psychological, neurodynamic, motor, and energy components of schoolchildren's health. The most promising systems for assessing schoolchildren's health are the following: an image-processing system (face detection and recognition, recognition of facial expressions and gestures), an optoelectronic measuring system (motion capture), an internal thermometry system (acoustothermometry), a navigation system, an electromagnetic measuring system, a system for content analysis of Internet traffic, a strain-dynamometric system, and a neurotechnological system. The proposed approach requires significant information resources for accumulating and automatically processing large amounts of data in a single analytical centre.
The use of artificial-intelligence algorithms will make it possible to detect hidden relationships among health indicators, assess risks, and give personalized recommendations. On the basis of the collected information, it is planned to create an electronic health passport for schoolchildren, with subsequent integration of this module into the national electronic student diary.


2021 ◽  
Author(s):  
Ventsislav Karadjov ◽  

The concept of data protection by default and by design is a fundamental step toward understanding contemporary personal data protection. The principle of "data protection by design" was introduced to protect the rights of individuals in the automated processing of personal data. It should be reflected in every contemporary embodiment of digitalisation, including artificial intelligence. Its complement is data protection by default.


2021 ◽  
Vol 6 (2(62)) ◽  
pp. 10-14
Author(s):  
Victoria Kostenko ◽  
Olga Bulgakova ◽  
Barbara Stelyuk

The object of research is the components of an intelligent system for searching information in electronic repositories of unstructured documents, based on ontologies of the subject area. One of the most problematic areas is the processing and analysis of the information contained in such repositories, and the paper considers several possibilities for increasing the efficiency of information processing. The study uses a method in which ontologies comprise the sets of terms represented in them; in addition, the ontological set includes information about subject areas, domains of definition, etc. The paper derives a sequence for defining the conceptual representation of an intelligent search system based on ontological components, presents the composition of the ontological system model, and describes the main functional components of the system for intelligent processing of information about electronic documents. The proposed approaches to identifying the components of the ontological model of the search system have a number of features, since the search system model must possess a set of properties: integrity, coherence, organization, integrability, and mobility. Ontologies that represent the basic concepts of the domain as a hierarchy of classes and the relationships between them, in a machine-readable format, make automated processing possible. Using ontologies as an intermediary between the user and the search process, and between the search process and the search system, facilitates the solution of a number of complex and non-standard information-retrieval tasks (for example, automating the search process). It also becomes possible to solve the problem of knowledge representation for displaying information relevant to user queries, as well as the problems of filtering and classifying information.
Compared to similar well-known search systems, this provides advantages such as a common terminology for software agents and users, protection of the information store from overflow and errors, and a solution to the problem of information aging.
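The role of the ontology as an intermediary between user vocabulary and stored documents can be sketched as a simple query-expansion step. The toy ontology, term sets, and function names below are illustrative assumptions, not the structure used by the authors' system.

```python
# Toy ontology: each class maps to the set of terms it covers
# (synonyms and narrower terms). A real subject-area ontology
# would be a hierarchy with typed relationships between classes.
ONTOLOGY = {
    "document": {"document", "report", "record", "file"},
    "repository": {"repository", "archive", "storage"},
}

def expand_query(query_terms, ontology):
    """Expand each query term with every term of any ontology
    class that contains it, bridging the user's vocabulary and
    the vocabulary of the stored documents."""
    expanded = set()
    for term in query_terms:
        expanded.add(term)
        for terms in ontology.values():
            if term in terms:
                expanded |= terms
    return expanded

def search(docs, query_terms, ontology):
    """Return documents sharing at least one (expanded) query term."""
    terms = expand_query(query_terms, ontology)
    return [d for d in docs if terms & set(d.lower().split())]
```

With this expansion, a query for "document" also retrieves texts that only use the word "report", which is the kind of common terminology between users and software agents that the abstract describes.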


2021 ◽  
pp. 110052
Author(s):  
Thien Dinh ◽  
Harris Panopoulos ◽  
Stan Poniger ◽  
Andrew M. Scott

2021 ◽  
Vol 2021 (2) ◽  
pp. 252-265
Author(s):  
Dmitriy Evdokimov ◽  
Dmitriy Kravchenko

Introduction. In the 1960s, the USSR and the United States raced not only in arms and space exploration, but also in various socio-economic spheres, including advanced automated management systems in economics, which treated the economy as a single object of management. Study objects and methods. The research drew on declassified archival documents, as well as domestic and foreign works on automated control systems (ACS). Results and discussion. The authors analyzed the fundamental goals and objectives set by the leaders of the two superpowers, focusing on the nationwide automated processing and control systems (NAPCS), their operating principles, and the reasons behind their failure. They compared NAPCS with alternative systems, e.g. ACS-70, ACS-80, the system of the Kuntsevo radio engineering plant, ARPANET, etc., as well as with modern systems based on the Soviet heritage. Conclusion. Apparently, the USSR won the first part of the ACS race, but the project failed, and the USA with its ARPANET (1969) became the undisputed leader. However, most contemporary Russian situation centers are based on the Soviet studies.


2021 ◽  
Vol 6 (166) ◽  
pp. 103-107
Author(s):  
Y. Dorozhko ◽  
E. Zakharova ◽  
G. Sarkisian ◽  
P. Mikhno

The paper considers the expediency of a single-format technology for the automated processing of geodetic measurements for the needs of the road-construction industry. This technology performs end-to-end automated processing of geodetic measurements with subsequent automated design, transferring the results of each design phase to the next in a single format and a single design environment. If corrections are made to earlier results, or new solutions are developed at any stage, the changes are reflected in all parts of the project. This approach allows a highway project developed once in digital form to be reused at all subsequent stages, including the development of overhaul and reconstruction projects, subject to changes in the digital terrain model. A digital model of the terrain and the highway section constructed in this way can be continuously updated and used at the stages of geodetic surveying, design, construction, repair, and maintenance until the next geodetic survey. The end-to-end single-format cycle includes design, technological design, engineering analysis, and control programs, which ensures the integrity of the geometry in the transition to each subsequent stage. End-to-end automated processing of geodetic measurement results for road repair or construction design tasks can be provided by software products such as «CREDO», «Topomatic Robur», «Autodesk Civil 3D» and others.
The use of single-format end-to-end automated processing of geodetic measurements, with subsequent construction of a digital terrain model, will speed up and simplify the development of design solutions and improve their quality, which in turn improves the quality of roads and man-made structures.


2021 ◽  
Author(s):  
Tristan Dubos ◽  
Axel Poulet ◽  
Geoffrey Thomson ◽  
Emilie Pery ◽  
Frederic Chausse ◽  
...  

Background: The three-dimensional nuclear arrangement of chromatin impacts many cellular processes operating at the DNA level in animal and plant systems. Chromatin organization is a dynamic process that can be affected by biotic and abiotic stresses. Three-dimensional imaging technology makes it possible to follow these dynamic changes, but only a few semi-automated processing methods currently exist for quantitative analysis of 3D chromatin organization. Results: We present an automated method, Nuclear Object DetectionJ (NODeJ), developed as an ImageJ plugin. This program segments and analyzes high-intensity domains in nuclei from 3D images. NODeJ performs a Laplacian convolution on the mask of a nucleus to enhance the contrast of intra-nuclear objects and allow their detection. We reanalyzed public datasets and determined that NODeJ is able to accurately identify heterochromatin domains from a diverse set of Arabidopsis thaliana nuclei stained with DAPI or Hoechst. NODeJ is also able to detect signals in nuclei from DNA FISH experiments, allowing for the analysis of specific targets of interest. Conclusion and availability: NODeJ allows for efficient automated analysis of subnuclear structures by avoiding the semi-automated steps, resulting in reduced processing time and analytical bias. NODeJ is written in Java and provided as an ImageJ plugin with a command line option to perform more high-throughput analyses. NODeJ can be downloaded from https://gitlab.com/axpoulet/image2danalysis/-/releases with source code, documentation and further information available at https://gitlab.com/axpoulet/image2danalysis. The images used in this study are publicly available at https://www.brookes.ac.uk/indepth/images/ and https://doi.org/10.15454/1HSOIE.
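The core idea of Laplacian contrast enhancement followed by segmentation of high-intensity domains can be sketched in 2D with NumPy/SciPy. NODeJ itself is a Java ImageJ plugin working on 3D stacks; the filter choice, the mean-plus-k-sigma threshold, and the function below are assumptions of this sketch, not the plugin's actual algorithm.

```python
import numpy as np
from scipy import ndimage

def detect_bright_domains(img, mask, k=2.0):
    """Enhance intra-nuclear contrast by subtracting a Laplacian
    (classic sharpening), restrict to the nucleus mask, then label
    connected domains brighter than mean + k*std inside the mask."""
    lap = ndimage.laplace(img.astype(float))
    enhanced = (img - lap) * mask        # sharpened image, zero outside mask
    vals = enhanced[mask.astype(bool)]
    thresh = vals.mean() + k * vals.std()
    labels, n = ndimage.label(enhanced > thresh)
    return labels, n
```

Subtracting the Laplacian boosts pixels that are brighter than their neighborhood, so compact heterochromatin-like spots stand out sharply against a diffuse background before thresholding.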


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Daniel L. Morrell ◽  
Timothy R. Moake ◽  
Michele N. Medina-Craven

Purpose This paper discusses how minor counterproductive workplace behavior (CWB) scripts can be acquired or learned through automated processes from one employee to another. Design/methodology/approach This research is based on insights from social information processing and automated processing. Findings This paper helps explain the automated learning of minor CWBs from one’s coworkers. Practical implications While some employees purposefully engage in counterproductive workplace behaviors with the intent to harm their organizations, other less overt and minor behaviors are not always carried out with harmful intent, but remain counterproductive, nonetheless. By understanding how the transfer of minor CWBs occurs, employers can strive to set policies and practices in place to help reduce these occurrences. Originality/value This paper discusses how negative workplace learning can occur. We hope to contribute to the workplace learning literature by highlighting how and why the spread of minor CWBs occurs amongst coworkers and spur future research focusing on appropriate interventions.


2021 ◽  
Vol 14 (11) ◽  
pp. 6711-6740
Author(s):  
Ranee Joshi ◽  
Kavitha Madaiah ◽  
Mark Jessell ◽  
Mark Lindsay ◽  
Guillaume Pirot

Abstract. A huge amount of legacy drilling data is available in geological survey archives but cannot be used directly, as it is compiled and recorded in unstructured textual form, in formats that vary with the database structure, company, logging geologist, investigation method, investigated materials and/or drilling campaign. The data are subjective and plagued by uncertainty, as they are likely to have been produced by tens to hundreds of geologists, each with their own personal biases. dh2loop (https://github.com/Loop3D/dh2loop, last access: 30 September 2021​​​​​​​) is an open-source Python library for extracting and standardizing geologic drill hole data and exporting it into readily importable interval tables (collar, survey, lithology). In this contribution, we extract, process and classify lithological logs from the Geological Survey of Western Australia (GSWA) Mineral Exploration Reports (WAMEX) database in the Yalgoo–Singleton greenstone belt (YSGB) region. The contribution also addresses the subjective nature and variability of the nomenclature of lithological descriptions within and across different drilling campaigns by using thesauri and fuzzy string matching. For this case study, 86 % of the extracted lithology data is successfully matched to lithologies in the thesauri. Since this process can be tedious, we also tested string matching on the free-text comments, which resulted in a matching rate of 16 % (7870 successfully matched records out of 47 823 records). The standardized lithological data are then classified into multi-level groupings that can be used to systematically upscale and downscale drill hole data inputs for multiscale 3D geological modelling. dh2loop formats legacy data, bridging the gap between legacy drill hole data and the drill hole analysis functionalities available in existing Python libraries (lasio, welly, striplog).
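The thesaurus-plus-fuzzy-matching idea can be illustrated with the standard library's difflib. The toy thesaurus, the similarity cutoff, and the helper below are assumptions for this sketch; dh2loop's own matching logic and thesauri are more elaborate.

```python
from difflib import SequenceMatcher

# Toy thesaurus: standardized lithology -> known spelling variants.
THESAURUS = {
    "basalt": ["basalt", "metabasalt", "basaltic lava"],
    "granite": ["granite", "granitic", "monzogranite"],
    "shale": ["shale", "black shale", "carbonaceous shale"],
}

def best_match(description, thesaurus, cutoff=0.6):
    """Return the standardized lithology whose thesaurus variant is
    most similar to the raw log description, or None if nothing
    scores above the cutoff."""
    best, best_score = None, cutoff
    desc = description.lower().strip()
    for litho, variants in thesaurus.items():
        for variant in variants:
            score = SequenceMatcher(None, desc, variant).ratio()
            if score > best_score:
                best, best_score = litho, score
    return best
```

Fuzzy scoring of this kind is what lets a misspelled or abbreviated log entry still land on a standardized lithology, while genuinely unrelated descriptions fall below the cutoff and stay unmatched.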

