Web Data Exchange Schema Mapping Optimization Method

2020 ◽  
Vol 10 (01) ◽  
pp. 76-89
Author(s):  
Yuhang Ji (宇航 纪)
Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2468
Author(s):  
Ri Lin ◽  
Feng Zhang ◽  
Dejun Li ◽  
Mingwei Lin ◽  
Gengli Zhou ◽  
...  

Docking technology for autonomous underwater vehicles (AUVs) involves energy supply, data exchange and navigation, and plays an important role in extending the endurance of AUVs. The navigation method used in the transition between AUV homing and docking influences subsequent tasks, so improving navigation accuracy at this stage is important. However, when using an ultra-short baseline (USBL) system, outliers and slow localization update rates can cause localization errors, and optical navigation methods using underwater lights and cameras are easily affected by ambient light. All of these factors may reduce the rate of successful docking. This paper presents an improved localization method based on multi-sensor information fusion. To improve the localization performance of AUVs under motion mutation and light variation conditions, an improved underwater simultaneous localization and mapping algorithm based on ORB features (IU-ORBSLAM) is proposed, together with a nonlinear optimization method that optimizes the scale of the monocular visual odometry in IU-ORBSLAM and the AUV pose. Localization tests and five docking missions were executed in a swimming pool. The results indicate that both localization accuracy and update rate are improved, and the 100% docking success rate achieved verifies the feasibility of the proposed localization method.
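The monocular-VO scale optimization mentioned in the abstract can be illustrated with a much simpler stand-in: a one-parameter least-squares fit that aligns scale-ambiguous visual odometry positions with absolute USBL fixes. This is a hedged sketch (the function name and the shared-frame assumption are ours), not the paper's actual nonlinear optimizer:

```python
import numpy as np

def fit_vo_scale(vo_positions, usbl_positions):
    """Least-squares scale factor aligning monocular VO positions (known
    only up to scale) with absolute USBL fixes, assuming both trajectories
    share the same origin and orientation."""
    vo = np.asarray(vo_positions, dtype=float)
    usbl = np.asarray(usbl_positions, dtype=float)
    # Minimize ||s * vo - usbl||^2  =>  s = <vo, usbl> / <vo, vo>
    return float(np.sum(vo * usbl) / np.sum(vo * vo))

# Toy check: USBL fixes are the VO track scaled by 2.5
vo = [[1.0, 0.0], [2.0, 1.0], [3.0, 1.5]]
usbl = [[2.5, 0.0], [5.0, 2.5], [7.5, 3.75]]
print(fit_vo_scale(vo, usbl))  # → 2.5
```

In practice the paper optimizes scale jointly with the AUV pose in a nonlinear framework; the closed-form fit above only conveys the underlying alignment idea.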


Author(s):  
Nabil A. Alrajeh ◽  
Badr Elmir ◽  
Bouchaïb Bounabat ◽  
Norelislam El Hami

Interoperability is one of the most challenging concerns facing healthcare information system (HIS) actors. Interoperability implementation in this context may involve data exchange interfacing, service-oriented interaction or even the composition of new composite healthcare processes. In fact, optimizing the effort spent on achieving interoperability is a key requirement to effectively set up, develop and evolve intra- and inter-organizational collaboration. To ensure interoperability project effectiveness, this paper proposes a modeling representation of the evolution of health process interoperability. The interoperability degrees of the automated processes involved are assessed using a ratio metric that takes into account all significant aspects, such as potentiality, compatibility and operational performance. A particle swarm optimization (PSO) algorithm is then used as a heuristic optimization method to find the best distribution of the effort needed to establish an efficient healthcare collaborative network.
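As a rough illustration of the heuristic named above, the sketch below implements a minimal global-best PSO minimizing a toy quadratic cost over a vector of effort shares. The cost function, bounds and swarm parameters are illustrative stand-ins, not the paper's actual interoperability model:

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=200, seed=0):
    """Minimal global-best particle swarm optimization over [0, 1]^dim.
    Hypothetical stand-in for the paper's effort-distribution search."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, (n_particles, dim))   # positions (effort shares)
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()                                # per-particle best positions
    pbest_cost = np.array([cost(p) for p in x])
    g = pbest[pbest_cost.argmin()].copy()           # global best position
    w, c1, c2 = 0.7, 1.5, 1.5                       # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, 0.0, 1.0)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        g = pbest[pbest_cost.argmin()].copy()
    return g, float(pbest_cost.min())

# Toy cost: squared distance to a target effort distribution
target = np.array([0.2, 0.5, 0.3])
best, best_cost = pso_minimize(lambda p: np.sum((p - target) ** 2), dim=3)
print(best, best_cost)  # best converges close to target, cost near 0
```

The real objective would instead score a candidate effort distribution by the resulting interoperability ratio metric of the collaborative network.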


2018 ◽  
Vol 3 (2) ◽  
pp. 162
Author(s):  
Slamet Sudaryanto Nurhendratno ◽  
Sudaryanto Sudaryanto

Data integration is an important step in combining information from multiple sources. The problem is how to find and combine, in an optimal way, data from scattered, heterogeneous data sources that have semantic interconnections. The heterogeneity of data sources results from a number of factors, including databases stored in different formats, different software and hardware used for database storage systems, and designs based on different semantic data models (Katsis & Papakonstantinou, 2009; Ziegler & Dittrich, 2004). Two approaches are currently used for data integration, Global as View (GAV) and Local as View (LAV), but each has its own advantages and limitations, so careful analysis is needed when applying them. Among the major factors to consider in making the integration of heterogeneous data sources efficient and effective is an understanding of the type and structure of the source data (the source schema). Another factor is the type of view produced by the integration (the target schema): the result of the integration can be presented as a single global view or as a variety of other views, so integrating structured data requires a different approach from integrating unstructured or semi-structured data. A schema mapping is a specific declaration that describes the relationship between the source schema and the target schema; it is expressed in logical formulas that support applications in data interoperability, data exchange and data integration. In this paper, the case of establishing a patient referral data center requires integrating data originating from a number of different health facilities, so a schema mapping system must be designed (to support optimization).
The data center, as the target schema, draws on the various referral service units as source schemas whose data are structured and independent, so the structured data from each source can be integrated into a unified view (the data center) using equivalent query rewriting. The data center, acting as the global schema, requires a "mediator" that maintains the global schema and the mappings between the global and local schemas. Since the data center under Global as View (GAV) tends to be a single, unified view, its integration with the various source schemas requires an integration facility. This mediator facility is a declarative mapping language that links each of the various source schemas specifically to the data center, so equivalent query rewriting is well suited to query optimization and to maintaining physical data independence.

Keywords: Global as View (GAV), Local as View (LAV), source schema, schema mapping
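The GAV arrangement described in this abstract, where global relations are defined as views over the source schemas and queries over the global schema are answered by unfolding those views, can be sketched minimally as follows (the relation and field names are hypothetical, not the paper's actual referral schema):

```python
# Minimal GAV-style mediator sketch: the global "referrals" relation is
# DEFINED AS a view (here, a union) over per-facility source relations,
# and a query against the global schema is rewritten into source scans.

facility_a = [{"patient": "P1", "facility": "A", "diagnosis": "flu"}]
facility_b = [{"patient": "P2", "facility": "B", "diagnosis": "asthma"}]

def referrals():
    # GAV mapping: global relation = union of the local sources
    yield from facility_a
    yield from facility_b

def query_by_diagnosis(d):
    # Query over the global schema, unfolded into the view body above
    return [r for r in referrals() if r["diagnosis"] == d]

print(query_by_diagnosis("flu"))  # returns the facility A record
```

With GAV, this unfolding step is exactly the "equivalent query rewriting" the abstract refers to: the rewritten query scans only the sources named in the view definition, so the global schema stays physically independent of the sources.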


2011 ◽  
Vol 403-408 ◽  
pp. 1062-1067 ◽  
Author(s):  
Payalpreet Kaur ◽  
Raghu Garg ◽  
Ravinder Singh ◽  
Mandeep Singh

Web data mining is a field that has gained popularity in recent times with the advancement of web mining technologies. Web data mining is the extraction of data on the web: a technique used to crawl through various web resources to collect required information, which enables an individual or a company to promote business, understand marketing dynamics, track new promotions floating on the Internet, and so on. Data on the web is unstructured and irregular and lacks a fixed, unified pattern, as it is presented in HTML, a presentation-oriented format unable to handle semi-structured or unstructured data. These difficulties led to the emergence of XML-based web data mining. XML was created so that richly structured documents could be used over the web, and it provides a standard for data exchange and data storage. This paper presents a web data mining model based on XML. In this model, unstructured data is first transformed to XML; the XML document is then stored in a database in the form of a string tree, and specific records are searched using a LINQ query. If a record does not exist in the database, the model checks the specific website for updates and repeats the same steps. Finally, the data selected by the LINQ query is displayed in a web browser. The feature that increases the speed of data extraction and reduces extraction time is the database storing data extracted earlier by one user, which other users can reuse by issuing a LINQ query. In this model there is no need to create a separate XSL file, because the model stores the XML document in the database as a string tree. The model is implemented in C# with XML.
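The store-then-query step of the model can be illustrated in Python using the standard library's ElementTree in place of C#'s LINQ (the element names and sample document below are invented for illustration, not taken from the paper):

```python
import xml.etree.ElementTree as ET

# Scraped, unstructured data after transformation to XML (illustrative)
raw = """
<records>
  <record><site>example.com</site><title>Promo A</title></record>
  <record><site>example.org</site><title>Promo B</title></record>
</records>
"""

# "Store" the XML document (the paper keeps it as a string tree in a database)
tree = ET.fromstring(raw)

def find_titles(site):
    # Search for specific records, in the spirit of the paper's LINQ query
    return [r.findtext("title") for r in tree.iter("record")
            if r.findtext("site") == site]

print(find_titles("example.org"))  # → ['Promo B']
```

A second user issuing the same query hits the stored document directly rather than re-crawling the site, which is the caching benefit the abstract describes.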


Water ◽  
2021 ◽  
Vol 13 (23) ◽  
pp. 3440
Author(s):  
Zhouyang Peng ◽  
Xi Jin ◽  
Wenjiao Sang ◽  
Xiangling Zhang

The interception facility is an important and frequently used measure for combined sewer overflow (CSO) control in city-scale drainage systems. The location and capacity of these facilities affect the pollution control efficiency and construction cost. Optimal design of these facilities is an active research area in environmental engineering, and among candidate optimization methods the simulation-optimization approach is the most attractive. However, time-consuming simulations of complex drainage system models (e.g., SWMM) make the simulation-optimization approach impractical. This paper proposes a new simulation-optimization method featuring multithreaded individual evaluation and fast data exchange, achieved by recoding SWMM with object-oriented programming. These new features greatly accelerate the optimization process. The non-dominated sorting genetic algorithm-III (NSGA-III) is selected as the optimization framework for its better performance on multi-objective optimization. The proposed method is applied to the optimal design of a terminal CSO interception facility in Wuhan, China. Compared with empirically designed schemes, the optimized schemes achieve better pollution control efficiency at lower construction cost. Additionally, the time consumed by the optimization process is compressed from days to hours, making the proposed method practical.
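The multithreaded individual-evaluation idea can be sketched as follows, with a toy stand-in for the SWMM simulation. The overflow and cost formulas are invented; only the parallel-evaluation pattern, where each candidate design is simulated independently so a whole NSGA-III population can be evaluated concurrently, is the point:

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(design):
    """Toy stand-in for a SWMM run: returns the two objectives
    (overflow volume, construction cost) for one candidate design."""
    capacity, location = design
    overflow = max(0.0, 10.0 - capacity) + 0.1 * location  # toy CSO volume
    cost = 2.0 * capacity                                  # toy construction cost
    return overflow, cost

def evaluate_population(population, workers=4):
    # Evaluate all individuals in parallel; map() preserves input order,
    # so objective tuples line up with the candidate designs.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(simulate, population))

pop = [(8.0, 1), (12.0, 2), (5.0, 0)]
print(evaluate_population(pop))  # → [(2.1, 16.0), (0.2, 24.0), (5.0, 10.0)]
```

In the paper, the gain comes from recoding SWMM so that independent simulation instances can actually run in parallel threads and exchange results in memory rather than through files.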


Author(s):  
Ronald Fagin ◽  
Laura M. Haas ◽  
Mauricio Hernández ◽  
Renée J. Miller ◽  
Lucian Popa ◽  
...  
2012 ◽  
Vol 54 (3) ◽  
pp. 105-113 ◽  
Author(s):  
Giansalvatore Mecca ◽  
Paolo Papotti

Author(s):  
Huiping Cao ◽  
Yan Qi ◽  
K. Selçuk Candan ◽  
Maria Luisa Sapino

Many applications require the exchange and integration of data from multiple, heterogeneous sources. eXtensible Markup Language (XML) is a standard developed to satisfy the convenient data exchange needs of these applications. However, XML by itself does not address data integration requirements. This chapter discusses the challenges and techniques in XML data integration. It first presents a four-step outline illustrating the steps involved in the integration of XML data, and then focuses on the first two of these steps: schema extraction and data/schema mapping. More specifically, the schema extraction discussion presents techniques to extract tree summaries, DTDs, or XML Schemas from XML documents, while the discussion on data/schema mapping focuses on techniques for aligning XML data and schemas.
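The schema-extraction step can be illustrated with a minimal tree-summary extractor that records, for each element tag, the set of child tags observed across an XML instance. This is a simple sketch of the idea, not the chapter's full DTD or XML Schema inference:

```python
import xml.etree.ElementTree as ET

def tree_summary(root):
    """Derive a simple tree summary (tag -> set of observed child tags)
    from an XML instance document."""
    summary = {}
    def walk(node):
        summary.setdefault(node.tag, set()).update(c.tag for c in node)
        for c in node:
            walk(c)
    walk(root)
    return summary

doc = ET.fromstring(
    "<library><book><title/><author/></book><book><title/></book></library>"
)
print(tree_summary(doc))
```

Note that the summary merges observations from all occurrences of a tag (both `book` elements contribute to one entry), which is exactly why such summaries approximate, rather than exactly reproduce, the schema that generated the documents.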

