data consistency
Recently Published Documents

Total documents: 559 (last five years: 172)
H-index: 32 (last five years: 2)

2022, Vol 12 (2), pp. 748
Author(s): Seong Jin Lim, Young Lae Kim, Sungjong Cho, Ik Keun Park

Pipes of various shapes constitute the pipelines used in industrial sites. These pipes are joined by welding, and the welds often lie on complex curvatures such as flanges, elbows, reducers, and branch pipes. Inspecting weld zones with complex curvatures using phased array ultrasonic testing (PAUT) faces several challenges: surfaces that are difficult to contact with probes, small-diameter pipes, spatial limitations imposed by adjacent pipes and nozzles, and sloped shapes. In this study, we developed a flexible PAUT probe (FPAPr) and a semi-automatic scanner improved to enable stable FPAPr scanning, securing the consistency and reproducibility of inspection data. A mock-up test specimen was created with a flange, an elbow, a reducer, and a branch pipe; artificial flaws were inserted into the specimen through notch and hole processing; and simulations and verification experiments were performed to verify the performance and field applicability of the FPAPr and the semi-automatic scanner.


2022, Vol 1, pp. 92
Author(s): Alicia Gómez Sánchez, Yolanda Álvarez, Basilio Colligris, Breandán N. Kennedy

The optokinetic response (OKR) is an effective behavioural assay for investigating functional vision in zebrafish. The rapid and widespread adoption of gene editing, drug screening and environmental modulation technologies has created a broader need among visual neuroscience researchers for affordable and more sensitive OKR, contrast sensitivity (CS) and visual acuity (VA) assays. Here, we demonstrate how 2D- and 3D-printed striped patterns or drums, coupled with a motorised base and microscope, provide a simple, cost-effective and efficient means to assay OKR, CS and VA in larval-juvenile zebrafish. In wild-type zebrafish at five days post-fertilisation (dpf), the 2D and 3D set-ups with 0.02 cycles per degree (cpd) (the standard OKR stimulus) and 100% black-white contrast evoked equivalent responses of 24.2±3.9 and 21.8±3.9 saccades per minute, respectively. Furthermore, although the OKR number was significantly reduced compared to the 0.02 cpd drum (p<0.0001), the 0.06 and 0.2 cpd drums elicited equivalent responses with both set-ups. Notably, standard OKRs varied with time of day: peak responses of 29.8±7 saccades per minute occurred in the early afternoon, with significantly reduced responses in the early morning and late afternoon (18.5±3 and 18.4±4.5 saccades per minute, respectively). A customised series of 2D-printed drums enabled analysis of VA and CS in 5-21 dpf zebrafish. Saccadic frequency in VA assays was inversely proportional to age and spatial frequency; in CS assays it was inversely proportional to age and directly proportional to the contrast of the stimulus. OKR, VA and CS of zebrafish larvae can thus be measured efficiently using 2D- or 3D-printed striped drums. For data consistency, the luminance of the OKR light source, the time of day when the analysis is performed, and the order of presentation of VA and CS drums must be considered. These simple methods allow effective and more sensitive analysis of functional vision in zebrafish.


Author(s): Olha Kozina, Volodymyr Panchenko, Oleksandr Rysovanyi

Multi-cloud middleware must perform many different resource management, control, and monitoring functions that must interoperate but may differ in implementation at each cloud service provider. The article proposes a mechanism for implementing a monotonic recording model in multi-cloud systems with a geo-distributed middleware architecture. It is shown that the location of the middleware modules determines the algorithm for synchronizing the start moments of the adjusting intervals required to generate the global sequence numbers used for recording customer data into the databases of multi-cloud systems. Figs.: 2. Tabl.: 1. Refs.: 10 titles. Keywords: middleware architecture, geo-distributed middleware architecture, multi-cloud systems.
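As an illustrative sketch only (the article does not give an implementation, and all names here are hypothetical), a monotonic global sequence number can be built from a synchronized adjusting-interval number, a per-interval local counter, and a node id, so that numbers issued by geo-distributed middleware nodes sort into one total order:

```python
from dataclasses import dataclass, field
import itertools

@dataclass
class SequenceGenerator:
    """Hypothetical generator of monotonic global sequence numbers.

    Each middleware node combines the current adjusting interval,
    a local counter, and its node id; lexicographic ordering of the
    resulting tuples is monotonic per node and total across nodes,
    provided interval start moments are synchronized between nodes.
    """
    node_id: int          # unique id of this middleware node
    interval: int = 0     # current adjusting-interval number
    _counter: itertools.count = field(default_factory=itertools.count)

    def start_new_interval(self) -> None:
        # Called at each synchronized interval boundary.
        self.interval += 1
        self._counter = itertools.count()

    def next_seq(self) -> tuple:
        # (interval, counter, node_id): later intervals always win,
        # and node_id breaks ties within an interval.
        return (self.interval, next(self._counter), self.node_id)
```

For example, a record written by any node in interval 1 sorts after every record written in interval 0, regardless of which node issued it.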


2021
Author(s): Osvaldo de Goes Bay Junior, Cícera Renata Diniz Vieira Silva, Cláudia S Martiniano, Monique da Silva Lopes, Lygia Maria de Figueiredo Melo, ...

BACKGROUND The increased applicability of information technology for evaluating health policies, programs, and care requires advancements in understanding trends and influences, its use by evaluators, and the implications for quality standards of evaluation. OBJECTIVE This study aimed to assess the applicability of information technology in evaluating the access and quality of primary health care in Brazil, considering international quality standards. METHODS We conducted a qualitative case study during the External Evaluation of Brazil’s National Program for Improving Primary Care Access and Quality. Data collection consisted of interviews, focus groups, and document analysis. Seven technicians from the Ministry of Health and 47 researchers from various higher education and research institutions across the country participated in the study. Data were categorized using the Atlas.ti software according to the quality standards of the Joint Committee on Standards for Educational Evaluation, followed by Bardin’s content analysis. RESULTS Results related to feasibility, thematic scope, field activity management, standardized data collection, data consistency, and transparency demonstrate improvements and opportunities for advancement in evaluation mediated by information technology. Its use favors the emergence of new practices and the remodeling of existing ones, taking into account the multiple components required by the complex assessment of access and quality in primary health care. Difficulty of operation, inoperative systems, and lack of investment in equipment and human resources remain challenges to increasing the effectiveness of information technology in evaluation.
CONCLUSIONS The strategic and intelligent use of information technology offered evaluators greater opportunity for stakeholder engagement and for inserting different organizational, operational, and methodological components capable of triggering influences and confluences, with connections in collaborative and synergistic networks, increasing quality and allowing the development of a more consistent and efficient evaluation with a greater possibility of incorporating the results into public health policies.


2021, Vol 6 (2), pp. 3-12
Author(s): Mike Taylor

From its earliest inception, FOLIO was conceived not as an ILS (Integrated Library System), but as a true Services Platform, composed of many independent but interdependent modules, and forming a foundation on which an ILS or other library software could be built out of relevant modules. This vision of modularity is crucial to FOLIO’s appeal to the library community, because it lowers the bar to participation: individual libraries may create modules that meet their needs, or hire developers to do so, or contribute to funding modules that will be of use to a broader community — all without needing “permission” from a central authority. The technical design of FOLIO is deeply influenced by the requirements of modularity, with the establishment of standard specifications and an emphasis on machine-readable API descriptions. While FOLIO’s modular design has proved advantageous, it also introduces difficulties, including cross-module searching and data consistency. Some conventions have been established to address these difficulties, and others are in the process of crystallizing. As the ILS built on FOLIO’s platform grows and matures, and as other application suites are built on it, it remains crucial to resist the shortcuts that monolithic systems can benefit from, and retain the vision of modularity that has so successfully brought FOLIO this far.


2021, pp. 157-165
Author(s): Anatoliy Gorbenko, Andrii Karpenko, Olga Tarasyuk

Distributed replicated NoSQL data storages such as Cassandra, HBase, and MongoDB were proposed to effectively manage Big Data sets whose volume, velocity, and variability are difficult to handle with traditional Relational Database Management Systems. Tradeoffs between consistency, availability, partition tolerance, and latency are intrinsic to such systems. Although relations between these properties have been identified by the well-known CAP and PACELC theorems in qualitative terms, it is still necessary to quantify how different consistency settings, deployment patterns, and other properties affect system performance. This experience report analyzes the performance of a Cassandra NoSQL database cluster and studies the tradeoff between data consistency guarantees and performance in distributed data storages. The primary focus is on investigating the quantitative interplay between Cassandra response time, throughput, and consistency settings under different single- and multi-region deployment scenarios. The study uses the YCSB benchmarking framework and reports the results of read and write performance tests of a three-replica Cassandra cluster deployed in Amazon AWS. In this paper, we also put forward a notation that can be used to formally describe the distributed deployment of a Cassandra cluster and its nodes relative to each other and to a client application. We present quantitative results showing how different consistency settings and deployment patterns affect Cassandra performance under different workloads. In particular, our experiments show that strong consistency costs up to 22% of performance in the case of a centralized Cassandra cluster deployment and can cause a 600% increase in read/write request latency if Cassandra replicas and their clients are globally distributed across different AWS Regions.
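The consistency cost studied here follows from Cassandra's tunable consistency levels: a read is guaranteed to see the latest write only when the read and write replica sets must overlap, i.e. R + W > N. A minimal sketch of that quorum arithmetic (an illustration of the general rule, not code from the paper):

```python
def is_strongly_consistent(replication_factor: int,
                           write_replicas: int,
                           read_replicas: int) -> bool:
    """Reads see the latest write when the read and write sets must
    overlap in at least one replica: R + W > N."""
    return read_replicas + write_replicas > replication_factor

# Three-replica cluster, as in the reported experiments:
N = 3
# Consistency level ONE for both reads and writes: fast, but a read
# may hit a replica the last write never reached.
assert not is_strongly_consistent(N, 1, 1)
# QUORUM/QUORUM (2 of 3 replicas each): strongly consistent, at the
# latency cost of waiting for an extra replica per operation.
assert is_strongly_consistent(N, 2, 2)
```

This is why the latency penalty grows so sharply in multi-region deployments: a quorum must then include at least one replica in a remote AWS Region.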


2021, Vol 0 (0)
Author(s): Thomas Voglhuber-Brunnmaier, Alexander O. Niedermayer, Bernhard Jakoby

Abstract Two main topics are presented in this work, both enabling more efficient use of oil condition monitoring systems based on resonant fluid sensing. First, a new fluid model for a recently introduced compact measurement unit for oil condition monitoring, based on simultaneous measurement of viscosity and density, is discussed. The new model is shown to achieve higher accuracies, as demonstrated by comparison with earlier models. The second topic deals with measuring fluid parameters over varying temperatures, thereby providing additional monitoring parameters and enhanced data consistency. We propose an alternative representation of the Vogel model using transformed parameters that have a clear physical meaning and are more stable in the presence of measurement noise.


2021, Vol 2094 (3), pp. 032001
Author(s): A L Zolkin, V D Munister, O Yu Bogaevskaya, A V Yumashev, A N Kornetov

Abstract The article deals with the problems of functioning and improvement of measuring and control devices in the medical industry. The classification and principles of organization of specialized, multifunctional, and single-function medical instrument and computer systems are considered. A model is proposed for solving the problems of scalability and availability through non-relational data management systems that combine methods for achieving atomicity and data consistency. Promising diagnostic methods in medicine are described, and the importance of a prompt assessment of the patient’s condition is emphasized. The optimization and modernization of digital radiography using modern machine learning algorithms are presented and justified.


2021, Vol 20 (5s), pp. 1-22
Author(s): Wei-Ming Chen, Tei-Wei Kuo, Pi-Cheng Hsiu

Intermittent systems enable batteryless devices to operate through energy harvesting by leveraging the complementary characteristics of volatile (VM) and non-volatile memory (NVM). Unfortunately, alternating and frequent accesses to heterogeneous memories for accumulative execution across power cycles can significantly hinder computation progress, mainly because more CPU time is wasted on slow NVM accesses than on fast VM accesses. This paper explores how to leverage heterogeneous cores to mitigate the progress impediment caused by heterogeneous memories. In particular, a delegable and adaptive synchronization protocol is proposed that allows memory accesses to be delegated between cores and dynamically adapts to diverse memory access latencies. Moreover, our design guarantees task serializability across multiple cores and maintains data consistency despite frequent power failures. We integrated our design into FreeRTOS running on a Cypress device featuring heterogeneous dual cores and hybrid memories. Experimental results show that, compared to recent approaches that assume single-core intermittent systems, our design improves computation progress by at least 1.8x and up to 33.9x by leveraging core heterogeneity.
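To illustrate how data consistency can survive frequent power failures, here is a generic double-buffered commit sketch (not the paper's actual protocol; the NVM is modeled as a plain dictionary): task results are written into an inactive NVM buffer and committed with a single atomic flag flip, so a crash mid-update never leaves NVM half-written and the interrupted task simply re-executes.

```python
# Simulated NVM contents: two versioned buffers plus a commit flag
# indicating which buffer holds the last committed state. In a real
# device this region would survive power loss.
nvm = {"buf": [{"x": 0}, {"x": 0}], "active": 0}

def run_task(update):
    """Run one task and commit its result to NVM atomically."""
    # 1. Copy committed state into the inactive buffer and update it
    #    there. A power failure at this point leaves the committed
    #    (active) buffer untouched, so the task can safely re-run.
    inactive = 1 - nvm["active"]
    nvm["buf"][inactive] = dict(nvm["buf"][nvm["active"]])
    update(nvm["buf"][inactive])
    # 2. Commit: a single flag flip atomically publishes the result.
    nvm["active"] = inactive

run_task(lambda state: state.update(x=state["x"] + 1))
assert nvm["buf"][nvm["active"]]["x"] == 1
```

The key property, shared with checkpointing schemes used in intermittent computing, is that the only non-idempotent step is one word-sized write (the flag flip), which real hardware can perform atomically.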

