A Comparative Analysis of Semantic Web Databases Based on Scalability and Performance

The volume of research is increasing rapidly, which makes exploring interesting and related publications over the internet increasingly laborious. As is well known, every database has a different architecture, and performance varies with the storage architecture and medium. In this research paper we analyze the two main categories of Semantic Web data store: (i) native, in-memory stores and (ii) non-native, non-memory stores, which reside on disk. Non-native stores are managed through external database services, for instance SQL-based systems such as MySQL and Oracle, which are used purely for storage. The database is a crucial component of any model that comes into existence: once data must be stored somewhere, it must also be accessible efficiently, and databases are accessed from any source by querying them. The proposed methodology consists of test cases for data retrieval and a query-optimization method to analyze database performance. LUBM (the Lehigh University Benchmark) is used for performance testing; it is a benchmark, not a data store. Semantic Web Data (SWD) is encoded so that anyone who wants to access related data can retrieve it efficiently. The main objective of this research is to compare the two types of SWD store, native and non-native, and to analyze their performance.
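To make the evaluation setup concrete, the sketch below shows the kind of query-timing test case the methodology describes, assuming Python with the rdflib library; the LUBM data file name is a placeholder, and ub:GraduateStudent is one of the benchmark's classes.

```python
import time
from rdflib import Graph

# Native, in-memory store; a non-native (disk-resident) store could be
# opened instead via a persistent rdflib store plugin such as BerkeleyDB.
g = Graph()
g.parse("lubm_university0.owl")  # placeholder LUBM data file

QUERY = """
PREFIX ub: <http://swat.cse.lehigh.edu/onto/univ-bench.owl#>
SELECT ?s WHERE { ?s a ub:GraduateStudent . }
"""

t0 = time.perf_counter()
rows = list(g.query(QUERY))
print(f"{len(rows)} results in {time.perf_counter() - t0:.3f}s")
```

A non-native store would be measured the same way, but with the graph opened over a persistent back end (or queried through a SQL-backed service) instead of held in memory.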

2009 ◽  
pp. 23-45 ◽  
Author(s):  
A. Radygin

The article deals with key tendencies in the development of Russia's market of mergers and acquisitions in the first decade of the 21st century. Quantitative parameters are analyzed using publicly available databases for the years 2003-2008, taking into consideration new tendencies related to the 2008 financial crisis. The active role played by the state in the market of corporate control represents an important factor. Special attention is given to the development of Russia's system of legal norms regulating the market of mergers and acquisitions.


2020 ◽  
Vol 15 ◽  
Author(s):  
Deeksha Saxena ◽  
Mohammed Haris Siddiqui ◽  
Rajnish Kumar

Background: Deep learning (DL) is an artificial neural network-driven framework with multiple levels of representation, in which non-linear modules are combined in such a way that the representation can be enhanced from a lower to a more abstract level. Though DL is used widely in almost every field, it has brought a particular breakthrough in the biological sciences, as it is used in disease diagnosis and clinical trials. DL can be combined with machine learning, but at times both are used individually as well. DL seems to be a better platform than conventional machine learning because it does not require an intermediate feature-extraction step and works well with larger datasets. DL is one of the most discussed fields among scientists and researchers these days for diagnosing and solving various biological problems. However, deep learning models need some improvement and experimental validation to become more productive. Objective: To review the available DL models and datasets that are used in disease diagnosis. Methods: Available DL models and their applications in disease diagnosis were reviewed, discussed, and tabulated. Types of datasets and some of the popular disease-related data sources for DL were highlighted. Results: We analyzed the frequently used DL methods and data types and discuss some of the recent deep learning models used for solving different biological problems. Conclusion: The review presents useful insights into DL methods, data types, and the selection of DL models for disease diagnosis.
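As a concrete illustration of the review's point that DL needs no intermediate feature-extraction step, here is a minimal end-to-end classifier sketch in PyTorch; it is not a model from the review, and the feature count, layer sizes, and data are placeholders.

```python
import torch
import torch.nn as nn

# Toy diagnostic classifier: raw measurements in, class logits out.
model = nn.Sequential(
    nn.Linear(30, 64),   # 30 raw input features, no hand-crafted extraction
    nn.ReLU(),
    nn.Linear(64, 2),    # two classes: disease / no disease
)
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 30)         # a mini-batch of raw inputs (synthetic)
y = torch.randint(0, 2, (8,))  # synthetic labels
for _ in range(100):           # training loop learns features end to end
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```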


Author(s):  
Kersten Schuster ◽  
Philip Trettner ◽  
Leif Kobbelt

We present a numerical optimization method to find highly efficient (sparse) approximations for convolutional image filters. Using a modified parallel tempering approach, we solve a constrained optimization that maximizes approximation quality while strictly staying within a user-prescribed performance budget. The results are multi-pass filters where each pass computes a weighted sum of bilinearly interpolated sparse image samples, exploiting hardware acceleration on the GPU. We systematically decompose the target filter into a series of sparse convolutions, trying to find good trade-offs between approximation quality and performance. Since our sparse filters are linear and translation-invariant, they do not exhibit the aliasing and temporal coherence issues that often appear in filters working on image pyramids. We show several applications, ranging from simple Gaussian or box blurs to the emulation of sophisticated Bokeh effects with user-provided masks. Our filters achieve high performance as well as high quality, often providing significant speed-up at acceptable quality even for separable filters. The optimized filters can be baked into shaders and used as a drop-in replacement for filtering tasks in image processing or rendering pipelines.
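The passes themselves are simple to express; the hard part the paper addresses is choosing the taps. Below is a minimal NumPy sketch of applying multi-pass sparse filtering, with hand-picked integer taps standing in for the optimizer's bilinearly interpolated (fractional) GPU samples.

```python
import numpy as np

def sparse_pass(img, taps):
    """One filter pass: weighted sum of sparsely sampled image positions.

    taps: list of (dx, dy, weight). Integer offsets via np.roll stand in
    for the bilinearly interpolated fractional sample positions on the GPU.
    """
    out = np.zeros_like(img)
    for dx, dy, w in taps:
        out += w * np.roll(img, shift=(dy, dx), axis=(0, 1))
    return out

# Hypothetical two-pass blur approximation; the paper's optimizer would
# choose tap positions and weights, here they are hand-picked.
passes = [
    [(-2, 0, 0.25), (0, 0, 0.5), (2, 0, 0.25)],   # horizontal taps
    [(0, -2, 0.25), (0, 0, 0.5), (0, 2, 0.25)],   # vertical taps
]
img = np.random.rand(64, 64)
for taps in passes:
    img = sparse_pass(img, taps)
```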


2021 ◽  
Vol 13 (12) ◽  
pp. 2342
Author(s):  
Jin-Bong Sung ◽  
Sung-Yong Hong

A new method for designing in-orbit synthetic aperture radar operational parameters has been implemented for the Korean Multi-purpose Satellite 6 mission. The implemented method optimizes the pulse repetition frequency when the satellite altitude deviates from its nominal value, so the synthetic aperture radar performance can continue to satisfy the requirements during in-orbit operation. Other commanding parameters have been designed by trading off among those parameters. This paper presents the new optimization method, which maintains synthetic aperture radar performance even in the case of an altitude variation. Design methodologies for determining the operational parameters at the nominal altitude and in orbit, respectively, are presented. In addition, a numerical simulation is presented to validate the proposed optimization and design methodologies.
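As a rough illustration of the kind of timing constraint such a PRF optimization must respect (this is generic SAR timing, not the authors' algorithm), a candidate PRF is only usable if the swath echoes avoid both the transmit events and the nadir return; all numeric values below are placeholders.

```python
# Simplified PRF re-selection sketch: scan candidate PRFs and keep those
# whose swath echoes collide with neither transmit pulses nor the nadir
# return. Altitude, slant ranges, pulse width, and guard time are made up.
C = 299_792_458.0  # speed of light, m/s

def prf_is_valid(prf, r_near, r_far, h, tau, guard):
    pri = 1.0 / prf
    t_nadir = (2.0 * h / C) % pri        # nadir-return delay within a PRI
    for r in (r_near, r_far):            # check both swath edges
        t = (2.0 * r / C) % pri          # echo arrival within a PRI
        if t < tau + guard or t > pri - (tau + guard):
            return False                 # overlaps a transmit event
        if abs(t - t_nadir) < tau + guard:
            return False                 # overlaps the nadir return
    return True

# Re-optimize when the altitude drifts from nominal (illustrative values).
candidates = [p for p in range(3000, 5000, 5)
              if prf_is_valid(p, 850e3, 900e3, 528e3, 40e-6, 5e-6)]
```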


2013 ◽  
Vol 2013 ◽  
pp. 1-12 ◽  
Author(s):  
Zhao Wu ◽  
Naixue Xiong ◽  
Yannong Huang ◽  
Qiong Gu ◽  
Chunyang Hu ◽  
...  

At present, cloud computing is one of the newest trends in distributed computation, and it is propelling another important revolution in the software industry. Cloud services composition is one of the key techniques in software development. The optimization of reliability and performance for a cloud services composition application, a typical stochastic optimization problem, is confronted with severe challenges due to its randomness and long transactions, as well as characteristics of cloud computing resources such as openness and dynamism. Traditional reliability and performance optimization techniques, for example Markov models and state-space analysis, have defects: they are time consuming, easily cause state-space explosion, and do not satisfy the assumption of component execution independence. To overcome these defects, in this paper we propose a fast optimization method for the reliability and performance of cloud services composition applications based on the universal generating function and a genetic algorithm. First, a reliability and performance model for cloud service composition applications based on multi-state system theory is presented. Then the reliability and performance definition based on the universal generating function is proposed. Based on this, a fast reliability and performance optimization algorithm is presented. Finally, illustrative examples are given.
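The universal generating function at the core of the method can be sketched compactly: a u-function maps each performance level of a component to its probability, and composition operators combine them. The sketch below is a minimal, generic UGF illustration, not the paper's full model; the service names and performance levels are invented.

```python
from collections import defaultdict
from itertools import product

def compose(u1, u2, op):
    """Combine two u-functions; `op` merges the performance levels."""
    out = defaultdict(float)
    for (g1, p1), (g2, p2) in product(u1.items(), u2.items()):
        out[op(g1, g2)] += p1 * p2
    return dict(out)

# Two services, each with throughput levels (requests/s) and probabilities.
s1 = {0: 0.05, 100: 0.95}                 # fails with probability 0.05
s2 = {0: 0.10, 80: 0.60, 120: 0.30}

series = compose(s1, s2, min)             # sequential: the bottleneck wins
parallel = compose(s1, s2, lambda a, b: a + b)  # parallel: throughput adds
reliability = sum(p for g, p in series.items() if g >= 80)  # P(perf >= demand)
```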


2021 ◽  
Vol 27 (4) ◽  
pp. 387-412
Author(s):  
Marcelo Aires Vieira ◽  
Elivaldo Lozer Fracalossi Ribeiro ◽  
Daniela Barreiro Claro ◽  
Babacar Mane

With the growth of cloud services, many companies have begun to persist and make their data available through services such as Data as a Service (DaaS) and Database as a Service (DBaaS). The DaaS model provides on-demand data through an Application Programming Interface (API), while the DBaaS model provides on-demand database management systems. Different data sources require effort to integrate data from different models, which include unstructured, semi-structured, and structured data. The heterogeneity of DaaS and DBaaS makes it challenging to integrate data from different services. In response to this problem, we developed the Data Join (DJ) method to integrate heterogeneous DaaS and DBaaS sources. DJ was described through canonical models and incorporated into a middleware as a proof of concept. A test case and three experiments were performed to validate our DJ method: the first experiment tackles data from DaaS and DBaaS in isolation; the second experiment associates data from different DaaS and DBaaS sources through one join clause; and the third experiment integrates data from three sources (one DaaS and two DBaaS) based on different data types (relational, NoSQL, and NewSQL) through two join clauses. Our experiments evaluated the viability, functionality, integration, and performance of the DJ method. Results demonstrate that the DJ method outperforms most related work on selecting and integrating data in a cloud environment.
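The flavor of such an integration, one DaaS source joined with one DBaaS source on a shared key, can be sketched as follows; this illustrates the general idea rather than the DJ middleware itself, and the endpoint URL, table, and column names are made up.

```python
import sqlite3
import requests

# DaaS side: rows arrive as JSON from a hypothetical REST endpoint.
daas_rows = requests.get("https://example.com/api/customers").json()

# DBaaS side: SQLite stands in for a managed database service.
con = sqlite3.connect("orders.db")
db_rows = con.execute("SELECT customer_id, total FROM orders").fetchall()

# Hash join on the shared key, mirroring a single join clause.
orders_by_customer = {}
for customer_id, total in db_rows:
    orders_by_customer.setdefault(customer_id, []).append(total)

joined = [
    {**c, "order_totals": orders_by_customer.get(c["id"], [])}
    for c in daas_rows
]
```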


2019 ◽  
Vol 8 (2S8) ◽  
pp. 1463-1468

Software program optimization for improved execution speed can be achieved by modifying the program. Programs are usually written in high-level languages and then translated into low-level assembly language. More thorough optimization and performance analysis can be performed at the low level than at the high level. Optimization improvement is measured as the difference in program execution performance. The several methods available for measuring program performance are classified into static approaches and dynamic approaches. This paper presents an alternative method of measuring code performance statically that is more accurate than commonly used code-analysis metrics. The newly proposed metrics are designed to expose the effectiveness of optimizations performed on code, specifically unroll optimizations. An optimization method, loop unrolling, is used to demonstrate the increased accuracy of the proposed metrics. The results of the study show that measuring Instructions Performed and Instruction Latency is a more accurate static metric than Instruction Count and, subsequently, the metrics based on it.
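The distinction between the metrics can be illustrated with a toy static model of a rolled versus 4x-unrolled loop; the instruction mix and latencies below are invented for illustration, not taken from the paper.

```python
# Each entry: (instruction, static_count, times_executed, latency_cycles)
rolled = [
    ("load",   1, 1000, 4),
    ("add",    1, 1000, 1),
    ("branch", 1, 1000, 2),   # loop overhead paid on every iteration
]
unrolled_4x = [
    ("load",   4, 250, 4),
    ("add",    4, 250, 1),
    ("branch", 1, 250, 2),    # loop overhead paid once per 4 iterations
]

def metrics(code):
    instruction_count = sum(n for _, n, _, _ in code)            # static size
    instructions_performed = sum(n * e for _, n, e, _ in code)   # dynamic work
    instruction_latency = sum(n * e * l for _, n, e, l in code)  # total cycles
    return instruction_count, instructions_performed, instruction_latency

print(metrics(rolled))       # (3, 3000, 7000)
print(metrics(unrolled_4x))  # (9, 2250, 5500)
```

Instruction Count alone (9 versus 3) would suggest the unrolled version is worse, while Instructions Performed and Instruction Latency expose the actual benefit of removing the per-iteration loop overhead.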


2018 ◽  
Vol 6 (11) ◽  
pp. 254-265
Author(s):  
Damitha D Karunaratna

Relational databases are typically created to fulfil the information requirements of a community of users, generally belonging to a single organization. Data stored in these databases is typically accessed using Structured Query Language (SQL) or through customized interfaces. With the popularity of the World Wide Web and the availability of a large number of relational databases for public access, there is a need for users to retrieve data from these databases through text-based queries, possibly using terms that they are familiar with. However, the inherent limitations of the query languages used to create and access data in relational databases do not allow users to access data with text-based queries. Also, the terms used in queries are limited to those used during the construction of the databases. This paper proposes an architecture to generate ontologies over relational databases and shows how they can be enhanced semantically by using available domain-specific or top-level ontologies, so that the data managed by the databases can be accessed with text-based queries. The feasibility of the proposed architecture was demonstrated by building a prototype system over a sample MySQL database.
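A minimal sketch of the first step, deriving an ontology skeleton from a relational schema, is shown below; it assumes Python with rdflib and uses SQLite's catalog in place of the paper's MySQL database, with an invented namespace.

```python
import sqlite3
from rdflib import Graph, Namespace, RDF, RDFS

# Each table becomes a class, each column a property of that class.
EX = Namespace("http://example.org/ontology#")
g = Graph()

con = sqlite3.connect("sample.db")
tables = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
for table in tables:
    cls = EX[table.capitalize()]
    g.add((cls, RDF.type, RDFS.Class))
    for _, col, *_ in con.execute(f"PRAGMA table_info({table})"):
        prop = EX[f"{table}_{col}"]
        g.add((prop, RDF.type, RDF.Property))
        g.add((prop, RDFS.domain, cls))

print(g.serialize(format="turtle"))
```

Linking the generated classes and properties to a domain-specific or top-level ontology would then supply the vocabulary for text-based querying.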


Author(s):  
Floriano Scioscia ◽  
Michele Ruta ◽  
Giuseppe Loseto ◽  
Filippo Gramegna ◽  
Saverio Ieva ◽  
...  

The Semantic Web of Things (SWoT) aims to support smart semantics-enabled applications and services in pervasive contexts. Due to architectural and performance issues, most Semantic Web reasoners are impractical to port: they are resource-consuming and basically designed for standard inference tasks on large ontologies. On the contrary, SWoT use cases generally require quick decision support through semantic matchmaking in resource-constrained environments. This paper describes Mini-ME (the Mini Matchmaking Engine), a mobile inference engine designed from the ground up for the SWoT. It supports Semantic Web technologies and implements both standard (subsumption, satisfiability, classification) and non-standard (abduction, contraction, covering, bonus, difference) inference services for moderately expressive knowledge bases. In addition to an architectural and functional description, usage scenarios and an experimental performance evaluation are presented on PC (against other popular Semantic Web reasoners), smartphone, and embedded single-board computer testbeds.
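For readers unfamiliar with matchmaking, the toy sketch below conveys the idea of classifying how well a resource description covers a request; it is vastly simpler than Mini-ME's description-logic services, and the feature sets and conflict table are invented.

```python
# Requests and resources are feature sets; the conflict table stands in
# for logical unsatisfiability between descriptions.
CONFLICTS = {("wired", "wireless"), ("indoor", "outdoor")}

def conflicting(a, b):
    return (a, b) in CONFLICTS or (b, a) in CONFLICTS

def match(request, resource):
    if any(conflicting(r, s) for r in request for s in resource):
        return "partial"    # incompatible: a requested feature is contradicted
    if request <= resource:
        return "full"       # every requested feature is provided
    return "potential"      # compatible, but some features remain unspecified

print(match({"wireless", "sensor"}, {"wireless", "sensor", "low_power"}))  # full
print(match({"wired"}, {"wireless", "sensor"}))                            # partial
```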

