A Connection Access Mechanism of Distributed Network based on Block Chain

Author(s):  
Xianfei Zhou ◽  
Hongfang Cheng ◽  
Fulong Chen

Cross-border payment optimization based on block chain has become a hot spot in the industry. Traditional approaches, which mainly include the block feature detection method, the fuzzy access method, and the adaptive scheduling method, perform feature extraction and quantitative regression analysis on collected distributed network connection access data, combine fuzzy clustering to optimize the data access design, and realize group detection and identification of data in the block chain. However, these methods incur a large computational overhead for distributed network connection access, and their packet detection capability is poor. This paper constructs a statistical sequence model of adaptive connection access data to extract descriptive statistical features of the similarity of distributed network block chain adaptive connection access data. The retrieval efficiency of the strategy is tested experimentally based on the strategy management method: matching query tests are performed on test sets of different query sizes, and different error-rate and search-delay parameters are set to evaluate their impact on retrieval performance. The single-query delay is computed as the total delay divided by the total number of matches. The optimization effect is mainly measured by the retrieval delay of the strategy in the strategy management contract; the smaller the delay, the higher the execution efficiency and the better the retrieval optimization effect.
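For concreteness, here is a minimal Python sketch of the delay metric described above, computing per-query delay as total delay divided by total matches. The `policy_contract.retrieve` lookup is a hypothetical stand-in for the strategy management contract, not the paper's actual interface:

```python
import time

def average_retrieval_delay(policy_contract, queries):
    """Toy measurement of per-query retrieval delay: total elapsed
    time divided by the total number of matched queries.
    policy_contract.retrieve(q) is a hypothetical lookup against
    the strategy management contract (an assumption, not the
    paper's API)."""
    total_delay = 0.0
    total_matches = 0
    for q in queries:
        start = time.perf_counter()
        result = policy_contract.retrieve(q)  # hypothetical call
        total_delay += time.perf_counter() - start
        if result is not None:
            total_matches += 1
    # Smaller average delay means higher execution efficiency.
    return total_delay / total_matches if total_matches else float("inf")
```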

2019 ◽  
Author(s):  
George Alter ◽  
Alejandra Gonzalez-Beltran ◽  
Lucila Ohno-Machado ◽  
Philippe Rocca-Serra

This article presents elements in the Data Tags Suite (DATS) metadata schema describing data access, data use conditions, and consent information. DATS is a product of the bioCADDIE Project, which created a data discovery index for searching across all types of biomedical data. The “access and use” metadata items in DATS are designed from the perspective of a researcher who wants to find and re-use existing data. Data reuse is often controlled to protect the privacy of subjects and patients. We focus on the impact of data protection procedures on data users. However, these procedures are part of a larger environment around patient privacy protection, and this article puts DATS metadata into the context of the administrative, legal, and technical systems used to protect confidential data.
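As a rough illustration of the kind of “access and use” record the article describes, the following Python dict sketches access mechanism, use-condition, and consent fields. All field names here are hypothetical placeholders and do not reproduce the actual DATS schema:

```python
# Illustrative sketch only: every key below is a hypothetical
# placeholder, not an actual DATS property name.
dataset_access_metadata = {
    "title": "Example clinical imaging dataset",
    "access": {
        "landing_page": "https://example.org/dataset/123",
        "authorization": "controlled",   # e.g. open | registered | controlled
        "authentication": "required",
    },
    "use_conditions": [
        {"rule": "no-reidentification", "type": "obligation"},
        {"rule": "research-use-only", "type": "permission"},
    ],
    "consent": {
        "type": "broad",                 # scope of participant consent
        "restrictions": ["no commercial use"],
    },
}
```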


2021 ◽  
Vol 10 (2) ◽  
pp. 34
Author(s):  
Alessio Botta ◽  
Jonathan Cacace ◽  
Riccardo De Vivo ◽  
Bruno Siciliano ◽  
Giorgio Ventre

With the advances in networking technologies, robots can use the almost unlimited resources of large data centers, overcoming the severe limitations imposed by onboard resources: this is the vision of Cloud Robotics. In this context, we present DewROS, a framework based on the Robot Operating System (ROS) which embodies the three-layer Dew-Robotics architecture, where computation and storage can be distributed among the robot, the network devices close to it, and the Cloud. After presenting the design and implementation of DewROS, we show its application in a real use case called SHERPA, which foresees a mixed ground and aerial robotic platform for search and rescue in an alpine environment. We used DewROS to analyze video acquired by the drones in the Cloud and quickly spot signs of human beings in danger. We performed a wide experimental evaluation using different network technologies and Cloud services from Google and Amazon, assessing the impact of several variables on the performance of the system. Our results show, for example, that video length has a minimal impact on response time compared with video size. In addition, we show that the response time depends on the Round Trip Time (RTT) of the network connection when the video is already loaded on the Cloud provider side. Finally, we present a model of the annotation time that takes into account the RTT of the connection used to reach the Cloud, discussing results and insights into how to improve current Cloud Robotics applications.
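A minimal sketch, in the spirit of the annotation-time model described above, assuming upload time scales with video size over bandwidth plus a few protocol round trips; the structure, parameter names, and coefficients are assumptions, not the paper's model:

```python
def annotation_time(video_size_mb, bandwidth_mbps, rtt_s,
                    processing_s, n_round_trips=4):
    """Hedged sketch of a Cloud annotation response-time model:
    upload time driven by video size and bandwidth, plus a number
    of protocol round trips, plus Cloud-side processing. All terms
    are assumptions for illustration."""
    upload_s = (video_size_mb * 8) / bandwidth_mbps  # MB -> megabits
    return upload_s + n_round_trips * rtt_s + processing_s

# When the video is already on the Cloud side, the upload term
# vanishes and response time is dominated by RTT, consistent with
# the observation in the abstract.
print(annotation_time(0.0, 100.0, 0.05, 1.2))
```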


2020 ◽  
Vol 245 ◽  
pp. 04024
Author(s):  
Daniele Spiga ◽  
Diego Ciangottini ◽  
Mirco Tracolli ◽  
Tommaso Tedeschi ◽  
Daniele Cesini ◽  
...  

The projected Storage and Compute needs for the HL-LHC will be a factor of up to 10 above what can be achieved by the evolution of current technology within a flat budget. The WLCG community is studying possible technical solutions to evolve the current computing model in order to cope with these requirements; one of the main areas of focus is resource optimization, with the ultimate aim of improving performance and efficiency, as well as simplifying operations and reducing their costs. As of today, storage consolidation based on a Data Lake model is considered a good candidate for addressing the HL-LHC data access challenges. The Data Lake model under evaluation can be seen as a logical system that hosts a distributed working set of analysis data. Compute power can be “close” to the lake, but also remote and thus completely external. In this context we expect data caching to play a central role as a technical solution to reduce the impact of latency and reduce network load. A geographically distributed caching layer will serve the many satellite computing centers that might appear and disappear dynamically. In this talk we propose a system of caches, distributed at the national level, describing both the deployment and the results of the studies made to measure the impact on CPU efficiency. In this contribution, we also present early results on a novel caching strategy, beyond the standard XRootD approach, whose results will be a baseline for an AI-based smart caching system.
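As a toy illustration of a caching strategy beyond plain “cache everything on first access” (this is not the XRootD cache or the authors' algorithm, merely a common admission heuristic used here for illustration), consider an LRU cache that admits a file only on its second request:

```python
from collections import Counter, OrderedDict

class AdmitOnSecondHitCache:
    """Toy sketch: an LRU cache that only admits a file after it has
    been requested twice, a simple stand-in for 'smarter than
    cache-everything' admission policies. Not the XRootD cache."""
    def __init__(self, capacity):
        self.capacity = capacity       # total bytes (or GB) allowed
        self.seen = Counter()          # request counts per file
        self.store = OrderedDict()     # filename -> size, LRU order

    def access(self, name, size):
        self.seen[name] += 1
        if name in self.store:              # cache hit
            self.store.move_to_end(name)
            return True
        if self.seen[name] >= 2:            # admit only repeated files
            self.store[name] = size
            while sum(self.store.values()) > self.capacity:
                self.store.popitem(last=False)  # evict least recent
        return False
```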


Author(s):  
Jon Hael Simon Brenas ◽  
Mohammad S. Al-Manir ◽  
Kate Zinszer ◽  
Christopher J. Baker ◽  
Arash Shaban-Nejad

Objective
Malaria is one of the top causes of death in Africa and some other regions of the world. Data-driven surveillance activities are essential for enabling timely interventions to alleviate the impact of the disease and eventually eliminate malaria. Improving the interoperability of data sources through the use of shared semantics is a key consideration when designing surveillance systems, which must be robust in the face of dynamic changes to one or more components of a distributed infrastructure. Here we introduce a semantic framework to improve the interoperability of malaria surveillance systems (SIEMA).

Introduction
In 2015, there were 212 million new cases of malaria and about 429,000 malaria deaths worldwide. African countries accounted for almost 90% of global malaria cases and 92% of malaria deaths. Currently, malaria data are scattered across different countries, laboratories, and organizations in heterogeneous data formats and repositories. The diversity of access methodologies makes it difficult to retrieve relevant data in a timely manner. Moreover, the lack of rich metadata limits the reusability of data and its integration. The current process of discovering, accessing and reusing the data is inefficient and error-prone, profoundly hindering surveillance efforts. As our knowledge about malaria and appropriate preventive measures becomes more comprehensive, malaria data management systems, data collection standards, and data stewardship are certain to change regularly. Collectively these changes will make it more difficult to perform accurate data analytics or achieve reliable estimates of important metrics, such as infection rates. Consequently, there is a critical need to rapidly re-assess the integrity of the data and knowledge infrastructures that experts depend on to support their surveillance tasks.

Methods
To address the heterogeneity of malaria data sources, we recruit domain-specific ontologies in the field (e.g. IDOMAL (1)) that define a shared lexicon of concepts and relations. These ontologies are expressed in the standard Web Ontology Language (OWL). To overcome challenges in accessing distributed data resources, we have adopted the Semantic Automated Discovery and Integration framework (SADI) (2) to ensure interoperability. SADI provides a way to describe services that provide access to data, detailing the inputs and outputs of services and a functional description. Existing ontology terms are used when building SADI service descriptions. The services can be discovered by querying a registry and combined into complex workflows. Users can issue queries in SPARQL syntax to a query engine, which can plan complex workflows to fetch the actual data, without having to know how the target data is structured or where it is located. To tackle changes in the target data sources, the ontologies, or the service definitions, we created a Dashboard (3) that can report any changes. The Dashboard reuses existing tools to perform a series of checks; these tools compare versions of ontologies and databases, allowing the Dashboard to report the changes. Once a change has been identified, a series of recommendations can be made, e.g. services can be retired or updated so that data access can continue.

Results
We used the Mosquito Insecticide Resistance Ontology (MIRO) (4) to define the common lexicon for our data sources and queries. The sources we created are CSV files that use the IRbase (4) schema. With the data thus defined, we specified several SPARQL queries and the SADI services needed to answer them; a hedged sketch of such a query is given after this abstract. These services were designed to enable access to data separated across different files using different formats. To showcase the capabilities of our Dashboard, we also modified parts of the service definitions, of the ontology, and of the data sources, which allowed us to test our change-detection capabilities. Once changes were detected, we manually updated the services to comply with the revised ontology and data sources and checked that the proposed changes yielded services that gave the right answers. In the future, we plan to make the updating of the services automatic.

Conclusions
Making the relevant information accessible to a surveillance expert in a seamless way is critical to tackling and ultimately eliminating malaria. To achieve this, we used existing ontologies and semantic web services to increase the interoperability of the various sources. Because the data and the ontologies are both likely to change frequently, we also designed a tool that allows us to detect and identify the changes and to update the services, so that the whole surveillance system becomes more resilient.

References
1. P. Topalis, E. Mitraka, V. Dritsou, E. Dialynas and C. Louis, “IDOMAL: the malaria ontology revisited,” Journal of Biomedical Semantics, vol. 4, no. 1, p. 16, Sep 2013.
2. M. D. Wilkinson, B. Vandervalk and L. McCarthy, “The Semantic Automated Discovery and Integration (SADI) web service design-pattern, API and reference implementation,” Journal of Biomedical Semantics, vol. 2, no. 1, p. 8, 2011.
3. J. H. Brenas, M. S. Al-Manir, C. J. O. Baker and A. Shaban-Nejad, “Change management dashboard for the SIEMA global surveillance infrastructure,” International Semantic Web Conference, 2017.
4. E. Dialynas, P. Topalis, J. Vontas and C. Louis, “MIRO and IRbase: IT Tools for the Epidemiological Monitoring of Insecticide Resistance in Mosquito Disease Vectors,” PLOS Neglected Tropical Diseases, 2009.
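As a hedged illustration of the kind of SPARQL query over SADI services described in the Methods above, the following Python sketch uses the real SPARQLWrapper library; the endpoint URL and the ontology IRIs and property names are hypothetical placeholders, not the actual SIEMA deployment or MIRO terms:

```python
# Hedged sketch: endpoint URL and miro:* terms are hypothetical
# placeholders for illustration only.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://example.org/siema/sparql")
endpoint.setQuery("""
    PREFIX miro: <http://example.org/miro#>
    SELECT ?site ?resistance WHERE {
        ?obs miro:collectedAt ?site ;
             miro:insecticideResistance ?resistance .
    }
""")
endpoint.setReturnFormat(JSON)

# The query engine plans the SADI service calls; the user never
# needs to know where or how the underlying data are stored.
results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    print(row["site"]["value"], row["resistance"]["value"])
```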


1998 ◽  
Vol 14 (suppl 3) ◽  
pp. S117-S123 ◽  
Author(s):  
Anaclaudia Gastal Fassa ◽  
Luiz Augusto Facchini ◽  
Marinel Mór Dall'Agnol

The International Agency for Research on Cancer (IARC) proposed this international historical cohort study to help resolve the controversy about the increased risk of cancer among workers in the pulp and paper industry. One of the most important aspects of this study in Brazil was the set of strategies used to overcome methodological challenges such as data access, data accuracy, data availability, multiple data sources, and the long follow-up period. Through multiple strategies it was possible to build a Brazilian cohort of 3,622 workers, to follow them with a 93 percent success rate, and to identify the cause of death in 99 percent of cases. This paper evaluates data access, data accuracy, the effectiveness of the strategies used, and the different sources of data.


2020 ◽  
Vol 245 ◽  
pp. 04027
Author(s):  
X. Espinal ◽  
S. Jezequel ◽  
M. Schulz ◽  
A. Sciabà ◽  
I. Vukotic ◽  
...  

HL-LHC will confront the WLCG community with enormous data storage, management and access challenges. These are as much technical as economic. In the WLCG-DOMA Access working group, members of the experiments and site managers have explored different models for data access and storage strategies to reduce cost and complexity, taking into account the boundary conditions given by our community. Several of these scenarios have been evaluated quantitatively, such as the Data Lake model and incremental improvements of the current computing model, with respect to resource needs, costs and operational complexity. To better understand these models in depth, analyses of traces of current data accesses and simulations of the impact of new concepts have been carried out. In parallel, evaluations of the required technologies took place, in testbed and production environments at small and large scale. We will give an overview of the activities and results of the working group, describe the models and summarise the results of the technology evaluation, focusing on the impact of storage consolidation in the form of Data Lakes, where the use of streaming caches has emerged as a successful approach to reduce the impact of latency and bandwidth limitations. We will describe the experience and evaluation of these approaches in different environments and usage scenarios. In addition, we will present the results of the analysis and modelling efforts based on data access traces of the experiments.
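A minimal sketch of the kind of trace-driven evaluation described above: replaying an access trace through a byte-aware LRU cache to estimate the fraction of bytes served from cache. The trace format, (filename, size) pairs, is an assumption for illustration, not the experiments' actual trace schema:

```python
from collections import OrderedDict

def hit_rate(trace, cache_size_gb):
    """Replay a data-access trace through a byte-aware LRU cache
    and return the fraction of bytes served from cache. A sketch
    of trace-based cache evaluation; the (filename, size_gb) trace
    format is an assumption."""
    cache, used = OrderedDict(), 0.0
    hit_b, total_b = 0.0, 0.0
    for name, size in trace:
        total_b += size
        if name in cache:
            cache.move_to_end(name)     # refresh LRU position
            hit_b += size
        else:
            cache[name] = size
            used += size
            while used > cache_size_gb:
                _, evicted = cache.popitem(last=False)  # evict LRU
                used -= evicted
    return hit_b / total_b if total_b else 0.0

# Tiny example: with 6 GB of cache, the repeated access to "a" hits.
print(hit_rate([("a", 2.0), ("b", 3.0), ("a", 2.0)], 6.0))
```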


2014 ◽  
Vol 513 (4) ◽  
pp. 042044 ◽  
Author(s):  
L A T Bauerdick ◽  
K Bloom ◽  
B Bockelman ◽  
D C Bradley ◽  
S Dasu ◽  
...  

This chapter discusses emerging issues and technologies, such as ethical responsibilities in a changing technological environment, the use of analytics and artificial intelligence, the evolution of communications technology, and the growth of block chain and mobile apps technology. Mobile apps technology is a particularly exciting development because the applications are personal in nature and target specific customer needs, gradually resolving the issue of explicitly knowing the customer and meeting their personal needs through the concept of personas. This chapter provides numerous examples of how the various technological developments can be specifically implemented to enhance public service delivery in the digital era. In this context, Chapter 12 has two important implications, namely the impact of the technology trends that are revolutionising public service delivery on the operations of government entities and on the users of government services in the future.


Author(s):  
Eddy L. Borges-Rey

This chapter explores the challenges that emerge from a narrow understanding of the principles underpinning Big Data, framed in the context of the teaching and learning of Science and Mathematics. This study considers the materiality of computerised data and examines how notions of data access, data sampling, data sense-making and data collection are nowadays contested by datafied public and private bodies, hindering the capacity of citizens to effectively understand and make better use of the data they generate or engage with. The study offers insights from secondary and documentary research, and its results suggest that understanding data in less constraining terms, namely: a) as capable of secondary agency, b) as the vital fluid of societal institutions, c) as gathered or accessed by new data brokers and through new technologies and techniques, and d) as mediated by the constant interplay between public and corporate spheres and philosophies, could greatly enhance the teaching and learning of Science and Mathematics in the framework of current efforts to advance data literacy.


Energies ◽  
2019 ◽  
Vol 13 (1) ◽  
pp. 140 ◽  
Author(s):  
Ahmed Alzahrani ◽  
Hussain Alharthi ◽  
Muhammad Khalid

The problems associated with the deployment of intermittent, unpredictable and uncontrollable solar photovoltaics (PV) can be feasibly solved with battery energy storage systems (BESS), particularly in terms of optimizing the available capacity, increasing reliability and reducing system losses. Consequently, the importance of BESS increases in proportion to the level of PV penetration. Nevertheless, the high cost of BESS remains a major concern and creates the need for a techno-economic solution. In this paper, we investigate the system losses and power quality issues associated with high deployment of PV in a grid network and formulate a BESS capacity optimization and placement methodology based on a genetic algorithm. The proposed methodology has been tested and validated on a standard IEEE 33-bus system. A brief stepwise analysis is presented to demonstrate the effectiveness and robustness of the proposed methodology in reducing the incremental system losses experienced with increased PV penetration. Furthermore, based on the proposed optimization objectives, a comparative study has been performed to quantify the impact and effectiveness of aggregated and distributed placement of BESS. The results show a substantial reduction in system losses, particularly in the case of distributed BESS placement.
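A toy sketch of the genetic-algorithm loop described above: each candidate is a (bus, capacity) pair and the fitness is a stand-in loss proxy rather than an actual power-flow solution of the IEEE 33-bus system; all constants, the loss function, and the operator choices here are assumptions for illustration:

```python
import random

# Toy GA for BESS siting/sizing. The losses() function is a
# hypothetical proxy, NOT a power-flow solver for the 33-bus feeder.
N_BUSES, POP, GENS = 33, 40, 60

def losses(bus, cap):
    # Hypothetical proxy: losses fall as storage sits nearer an
    # assumed weak point of the feeder and is adequately sized.
    return abs(bus - 25) * 0.02 + abs(cap - 3.0) * 0.05

def random_candidate():
    return (random.randrange(N_BUSES), random.uniform(0.5, 6.0))

def mutate(c):
    bus, cap = c
    if random.random() < 0.3:               # occasionally move the bus
        bus = random.randrange(N_BUSES)
    cap = min(6.0, max(0.5, cap + random.gauss(0, 0.5)))
    return (bus, cap)

def crossover(a, b):
    # Swap one gene (bus or capacity) between two parents.
    return (a[0], b[1]) if random.random() < 0.5 else (b[0], a[1])

pop = [random_candidate() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=lambda c: losses(*c))
    elite = pop[: POP // 4]                  # keep the best quarter
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP - len(elite))]
    pop = elite + children

best = min(pop, key=lambda c: losses(*c))
print(f"best bus {best[0]}, capacity {best[1]:.2f} MWh")
```

In a real study the fitness evaluation would run a load-flow calculation per candidate, which is where nearly all of the computational cost lies.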

