Comparative analysis of database systems dedicated for Android

2020 ◽  
Vol 15 ◽  
pp. 126-132
Author(s):  
Kamil Wałachowski ◽  
Grzegorz Kozieł

The article presents a comparative analysis of mobile databases dedicated to Android. The analysis compares the relational SQLite database with selected non-relational databases: Realm, ObjectBox and SnappyDB. Theoretical issues, such as the supported data types, are discussed. The performance of the mobile databases was tested for saving, editing, deleting, searching and sorting data, and CPU and RAM usage were measured while saving data. The research also included a comparison of the sizes of the database files on internal storage.
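As a rough illustration of the kind of write-performance measurement described, the following sketch times a bulk insert using Python's built-in sqlite3 module rather than the Android APIs used in the study; the table name, record count and fields are illustrative assumptions.

```python
import sqlite3
import time

def time_bulk_insert(db_path: str, n_records: int = 10_000) -> float:
    """Measure how long it takes to insert n_records rows into SQLite."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS person (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)"
    )
    rows = [(None, f"user{i}", i % 90) for i in range(n_records)]

    start = time.perf_counter()
    with conn:  # single transaction, as a batched benchmark would use
        conn.executemany("INSERT INTO person VALUES (?, ?, ?)", rows)
    elapsed = time.perf_counter() - start

    conn.close()
    return elapsed

if __name__ == "__main__":
    print(f"Insert time: {time_bulk_insert('bench.db'):.3f} s")
```

The same pattern extends to the other operations in the study (update, delete, search, sort) by swapping the timed statement.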

2011 ◽  
pp. 49-80
Author(s):  
Hans-Peter Kriegel ◽  
Martin Pfeifle ◽  
Marco Potke ◽  
Thomas Seidl ◽  
Jost Enderle

In order to generate efficient execution plans for queries comprising spatial data types and predicates, the database system has to be equipped with appropriate index structures, query processing methods and optimization rules. Although available extensible indexing frameworks provide a gateway for the seamless integration of spatial access methods into the standard process of query optimization and execution, they do not facilitate the actual implementation of the spatial access method. An internal enhancement of the database kernel is usually not an option for database developers: embedding a custom, block-oriented index structure into concurrency control, recovery services and buffer management would cause extensive implementation effort and maintenance costs, at the risk of weakening the reliability of the entire system. Server stability can be preserved by delegating index operations to an external process, but this approach induces severe performance bottlenecks due to context switches and inter-process communication. We therefore present the paradigm of object-relational spatial access methods, which fits perfectly with the common relational data model and is highly compatible with the extensible indexing frameworks of existing object-relational database systems, allowing the user to define application-specific access methods.
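To illustrate the general idea of an object-relational access method (not the specific structures from this chapter), the sketch below maps a simple grid index onto ordinary relational tables, so a spatial window query becomes plain indexed SQL handled by the existing B-tree, transaction and recovery machinery. The cell size, table and column names are assumptions for illustration only.

```python
import sqlite3

CELL = 100.0  # grid cell size; an assumption for this sketch

def cell_ids(xmin, ymin, xmax, ymax):
    """All grid cells overlapped by a rectangle."""
    for cx in range(int(xmin // CELL), int(xmax // CELL) + 1):
        for cy in range(int(ymin // CELL), int(ymax // CELL) + 1):
            yield cx, cy

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE objects (id INTEGER PRIMARY KEY,
                          xmin REAL, ymin REAL, xmax REAL, ymax REAL);
    -- the 'index' is just another relational table with a B-tree on (cx, cy)
    CREATE TABLE grid_index (cx INTEGER, cy INTEGER, obj_id INTEGER);
    CREATE INDEX idx_grid ON grid_index (cx, cy);
""")

def insert_object(obj_id, xmin, ymin, xmax, ymax):
    conn.execute("INSERT INTO objects VALUES (?,?,?,?,?)",
                 (obj_id, xmin, ymin, xmax, ymax))
    conn.executemany("INSERT INTO grid_index VALUES (?,?,?)",
                     [(cx, cy, obj_id) for cx, cy in cell_ids(xmin, ymin, xmax, ymax)])

def window_query(xmin, ymin, xmax, ymax):
    """Candidates come from the indexed grid table; an exact test refines them."""
    cells = list(cell_ids(xmin, ymin, xmax, ymax))
    cond = " OR ".join("(g.cx = ? AND g.cy = ?)" for _ in cells)
    params = [v for c in cells for v in c] + [xmax, xmin, ymax, ymin]
    sql = ("SELECT DISTINCT o.id FROM grid_index g JOIN objects o ON o.id = g.obj_id "
           f"WHERE ({cond}) AND o.xmin <= ? AND o.xmax >= ? "
           "AND o.ymin <= ? AND o.ymax >= ?")
    return [r[0] for r in conn.execute(sql, params)]
```

Because the index lives in ordinary tables, concurrency control, recovery and buffering come for free from the host database, which is the key point the abstract makes.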


2018 ◽  
Vol 26 (2) ◽  
pp. 246-254 ◽  
Author(s):  
Carsten Q. Schneider

The sole purpose of the enhanced standard analysis (ESA) is to prevent so-called untenable assumptions in Qualitative Comparative Analysis (QCA). One source of such assumptions can be statements of necessity. QCA realists, the majority of QCA researchers, have elaborated a set of criteria for meaningful claims of necessity: empirical consistency, empirical relevance, and conceptual meaningfulness. I show that once Thiem’s (2017) data mining approach to detecting supersets is constrained by adhering to those standards, no CONSOL effect of Schneider and Wagemann’s ESA exists. QCA idealists, challenging most of QCA realists’ conventions, argue that separate searches for necessary conditions are futile because the most parsimonious solution formula reveals the minimally necessary disjunction of minimally sufficient conjunctions. Engaging with this perspective, I address several unresolved empirical and theoretical issues that seem to prevent the QCA idealist position from becoming mainstream.
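As a rough illustration of the "empirical consistency" and "empirical relevance" criteria mentioned above, the sketch below computes the commonly used consistency-of-necessity and relevance-of-necessity scores from fuzzy-set membership values; the data are invented and the formulas follow the standard textbook definitions rather than anything specific to this article.

```python
def necessity_consistency(x, y):
    """Consistency of X as necessary for Y: sum(min(x, y)) / sum(y)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)

def relevance_of_necessity(x, y):
    """Relevance of Necessity (RoN): sum(1 - x) / sum(1 - min(x, y))."""
    return sum(1 - a for a in x) / sum(1 - min(a, b) for a, b in zip(x, y))

# invented fuzzy-set membership scores for a condition X and an outcome Y
X = [0.9, 0.8, 0.7, 0.9, 0.6]
Y = [0.8, 0.7, 0.6, 0.4, 0.3]
print(necessity_consistency(X, Y), relevance_of_necessity(X, Y))
```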


2020 ◽  
Author(s):  
Yuanhui Yu

Embedded systems are becoming ever more widely used: devices with digital interfaces such as watches, microwave ovens, video recorders and automobiles all rely on them, yet most embedded systems implement the entire control logic in a single embedded application program. At present, embedded applications on the WinCE platform are moving towards microservices and miniaturization, and growing volumes of embedded device application data require small embedded database systems to organize, store and manage them. The embedded database SQLite offers many advantages, such as zero configuration, a lightweight footprint, multiple interfaces, easy portability, readable source code and an open-source licence, and it is widely used on the WinCE embedded operating system. This article discusses the technical characteristics of the SQLite database in detail, covers SQLite data manipulation and the porting of SQLite to the WinCE platform, and finally implements SQLite data management on a WinCE mobile terminal based on MFC programming.
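The article works through SQLite data manipulation via the C API from MFC; for brevity, the same create/insert/query/update/delete cycle is sketched here with Python's sqlite3 module, using an invented table.

```python
import sqlite3

# a minimal CRUD cycle against SQLite; table and columns are illustrative
conn = sqlite3.connect("device.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS readings (id INTEGER PRIMARY KEY, sensor TEXT, value REAL)"
)

with conn:  # insert
    conn.execute("INSERT INTO readings (sensor, value) VALUES (?, ?)", ("temp", 21.5))

# query
rows = conn.execute(
    "SELECT id, sensor, value FROM readings WHERE sensor = ?", ("temp",)
).fetchall()

with conn:  # update
    conn.execute("UPDATE readings SET value = ? WHERE sensor = ?", (22.0, "temp"))

with conn:  # delete
    conn.execute("DELETE FROM readings WHERE value < ?", (0.0,))

conn.close()
```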


2020 ◽  
Vol 17 ◽  
pp. 373-378
Author(s):  
Arkadiusz Solarz ◽  
Tomasz Szymczyk

This article presents a comparative analysis of four popular database technologies. The commercial Oracle Database and SQL Server systems are compared with the open-source database management systems PostgreSQL and MySQL. These systems have been available on the market for over a dozen years; the versions released in 2019 were selected for testing and comparison. For the purposes of the comparative analysis, a database schema was designed and populated, and test scenarios were then prepared on the basis of the most common operations performed with database systems.
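A minimal sketch of how such test scenarios can be timed uniformly across engines through Python's DB-API; the connection factories, sample query and repetition count are assumptions and do not reflect the schema or scenarios used in the article.

```python
import time

def run_scenario(connect, sql, params=(), repetitions=5):
    """Time one test scenario against one engine via a DB-API connection factory."""
    conn = connect()
    cur = conn.cursor()
    timings = []
    for _ in range(repetitions):
        start = time.perf_counter()
        cur.execute(sql, params)
        cur.fetchall()
        timings.append(time.perf_counter() - start)
    conn.close()
    return min(timings)  # best-of-n, a common benchmarking convention

# illustrative wiring only; real connection factories would come from the
# respective drivers, e.g. psycopg2.connect(...) for PostgreSQL or
# mysql.connector.connect(...) for MySQL:
# engines = {"PostgreSQL": lambda: psycopg2.connect(dsn),
#            "MySQL": lambda: mysql.connector.connect(**cfg)}
# for name, factory in engines.items():
#     print(name, run_scenario(factory, "SELECT * FROM orders WHERE total > %s", (100,)))
```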


Author(s):  
Palvanov Izzat Turaevich

This article covers foreign experience in introducing probation and testing institutions as alternatives within the criminal punishment system, and at the same time investigates the theoretical issues of this criminal legal relationship. In his research, the author focuses mainly on the use of probation as an alternative to imprisonment, provides a comparative analysis of the practice of the Republic of Uzbekistan and of foreign countries in this field, and substantiates his conclusions and proposals.


Author(s):  
Markus Schneider

Spatial database systems and geographical information systems are currently only able to support geographical applications that deal only with crisp spatial objects, that is, objects whose extent, shape, and boundary are precisely determined. Examples are land parcels, school districts, and state territories. However, many new, emerging applications are interested in modeling and processing geographic data that are inherently characterized by spatial vagueness or spatial indeterminacy. Examples are air-polluted areas, temperature zones, and lakes. These applications require novel concepts due to the lack of adequate approaches and systems. In this chapter, the authors show how soft computing techniques can provide a solution to this problem. They give an overview of two type systems, or algebras, that can be integrated into database systems and utilized for the modeling and handling of spatial vagueness. The first type system, called Vague Spatial Algebra (VASA), is based on well-known, general, and exact models of crisp spatial data types and introduces vague points, vague lines, and vague regions. This enables an exact definition of the vague spatial data model, since it can be built upon an already existing theory of spatial data types. The second type system, called Fuzzy Spatial Algebra (FUSA), leverages fuzzy set theory and fuzzy topology and introduces novel fuzzy spatial data types for fuzzy points, fuzzy lines, and fuzzy regions. This enables an even more fine-grained modeling of spatial objects that do not have sharp boundaries and interiors or whose boundaries and interiors cannot be precisely determined. The chapter provides a formal definition of the structure and semantics of both type systems. Further, the authors introduce spatial set operations for both algebras and obtain vague and fuzzy versions of geometric intersection, union, and difference. Finally, they describe how these data types can be embedded into extensible database systems and show some example queries.
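A rough sketch of the VASA idea of building vague types from pairs of crisp objects, shown here with shapely polygons (the chapter itself does not use shapely): a vague region is represented by a kernel part (certainly belonging) and a conjecture part (possibly belonging), and a vague intersection is derived from crisp intersections. The operation shown is a simplified assumption, not the chapter's formal algebra.

```python
from dataclasses import dataclass
from shapely.geometry import Polygon

@dataclass
class VagueRegion:
    kernel: Polygon      # points that certainly belong to the region
    conjecture: Polygon  # points that possibly (but not certainly) belong

    def intersection(self, other: "VagueRegion") -> "VagueRegion":
        # certain only where both regions are certain
        kernel = self.kernel.intersection(other.kernel)
        # possible where both regions are at least possible, minus the certain part
        outer = self.kernel.union(self.conjecture).intersection(
            other.kernel.union(other.conjecture))
        return VagueRegion(kernel, outer.difference(kernel))

# an invented example: two overlapping vague zones
a = VagueRegion(Polygon([(0, 0), (4, 0), (4, 4), (0, 4)]),
                Polygon([(4, 0), (6, 0), (6, 4), (4, 4)]))
b = VagueRegion(Polygon([(3, 1), (8, 1), (8, 3), (3, 3)]),
                Polygon([(3, 3), (8, 3), (8, 5), (3, 5)]))
print(a.intersection(b).kernel.area)
```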


2020 ◽  
Vol 34 (6) ◽  
pp. 701-708
Author(s):  
Venkat Rayala ◽  
Satyanarayan Reddy Kalli

Clustering has emerged as a powerful mechanism for analysing the massive data generated by modern applications; its main aim is to group objects into categories of similar items. However, clustering big data raises various challenges. Deep learning has become a powerful paradigm for big data analysis, but it requires a huge number of samples to train the model, which is time consuming and expensive; this can be avoided through a fuzzy approach. In this research work, we design and develop an Improvised Fuzzy C-Means (IFCM) method which combines an encoder-decoder Convolutional Neural Network (CNN) model with the Fuzzy C-Means (FCM) technique to enhance the clustering mechanism. The encoder-decoder CNN is used for feature learning and faster computation. Into the standard FCM we introduce a function that measures the distance between the cluster centre and each instance, which helps to achieve better clustering, and we then introduce an Optimized Encoder-Decoder (OED) CNN model to improve performance and speed up computation. To evaluate the proposed mechanism, three distinct datasets, namely the Modified National Institute of Standards and Technology (MNIST), Fashion-MNIST and United States Postal Service (USPS) datasets, are used, and evaluation is carried out using performance metrics such as Accuracy, Adjusted Rand Index (ARI) and Normalized Mutual Information (NMI). A comparative analysis on each dataset shows that IFCM outperforms the existing models.
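For orientation, here is a minimal numpy sketch of the standard Fuzzy C-Means updates that IFCM builds on; the encoder-decoder CNN features, the modified distance function and the OED model from the article are not reproduced, and the fuzzifier, iteration count and toy data are assumptions.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Standard FCM: alternate centre and membership updates for n_iter rounds."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        U = 1.0 / ((dist[:, :, None] / dist[:, None, :]) ** (2 / (m - 1))).sum(axis=2)
    return centers, U

# invented 2-D toy data; the article clusters MNIST / Fashion-MNIST / USPS features
X = np.vstack([np.random.randn(50, 2) + c for c in ([0, 0], [5, 5], [0, 5])])
centers, U = fuzzy_c_means(X, n_clusters=3)
print(centers)
```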

