Selected problems of designing modern industrial databases

2018, Vol. 183, pp. 01017
Author(s):  
Dariusz Karpisz
Anna Kiełbus

The paper presents problems of designing databases for various branches of industry. The development of information technologies, and of object-oriented programming in particular, has shifted the emphasis from data modelling to the modelling of applications. The growth of unstructured Big Data in the Industry 4.0 era, together with the requirement to share a data model among many applications, calls for a return to data analysis and design, as presented in the article.
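
To make the sharing problem concrete, here is a small invented illustration (not from the paper): when each application models the same entity in its own object-oriented code, the definitions drift apart, which is the argument for designing a shared data model first. A minimal sketch in Python, with all names hypothetical:

```python
# Two applications privately modelling the same sensor entity (invented
# example): key types and units drift apart between codebases.

class SensorReadingApp1:              # application 1's private model
    def __init__(self, sensor_id: int, value: float):
        self.sensor_id = sensor_id
        self.value = value            # implicitly degrees Celsius

class SensorReadingApp2:              # application 2's private model
    def __init__(self, sid: str, temp_f: float):
        self.sid = sid                # same entity, different key type
        self.temp_f = temp_f          # same quantity, different unit

# A single agreed data model fixes keys and units before application
# code is written, which is the return to data analysis and design
# the abstract argues for:
SHARED_SCHEMA = """
CREATE TABLE sensor_reading (
    sensor_id INTEGER NOT NULL,
    value_c   REAL    NOT NULL,  -- unit fixed by the data model
    taken_at  TEXT    NOT NULL
);
"""
print(SHARED_SCHEMA)
```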

Author(s):  
Peter McBrien

Data held in information systems is modelled using a variety of languages, where the choice of language may be decided by functional as well as non-technical concerns. This chapter focuses on data modelling languages and the challenges faced in mapping schemas from one data modelling language into another. We review the ER, relational and UML modelling languages (the latter being representative of object-oriented programming languages), highlighting aspects of each modelling language that are not representable in the others. We describe how a nested hypergraph data model may be used as an underlying representation of data models, and hence present the differences between the modelling languages in a more precise manner. Finally, we propose a platform on which an automated procedure for translating schemas from one modelling language to another can be built in the future.
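
As a rough illustration of the chapter's central idea, the sketch below encodes an ER-style entity as a nested hypergraph (nodes plus edges that may themselves contain edges) and flattens it into a relational definition. The encoding, class names, and translation rule are invented for illustration; they are not the authors' actual formalism.

```python
# Invented sketch: schemas from different modelling languages reduced to
# a nested hypergraph, with translation performed on that structure.
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    name: str

@dataclass(frozen=True)
class Edge:
    """A hyperedge over nodes and, possibly, other edges (nesting)."""
    name: str
    members: tuple  # mix of Node and Edge instances

def er_entity(name, attrs):
    """Encode an ER entity with its attributes as a nested hyperedge."""
    entity = Node(name)
    attr_edges = tuple(Edge(f"{name}.{a}", (entity, Node(a))) for a in attrs)
    return Edge(name, (entity,) + attr_edges)

def to_relational(schema_edge):
    """Flatten the nested hypergraph into a relational table definition."""
    cols = [m.members[1].name for m in schema_edge.members if isinstance(m, Edge)]
    return f"CREATE TABLE {schema_edge.name} ({', '.join(cols)});"

person = er_entity("Person", ["id", "name", "dob"])
print(to_relational(person))  # CREATE TABLE Person (id, name, dob);
```

In such a shared representation, a feature that one language cannot express shows up as extra nodes or nesting left over after translation, which makes the differences between the modelling languages explicit, as the chapter sets out to do.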


2020, Vol. 26(4), pp. 190-194
Author(s):  
Jacek Pietraszek
Norbert Radek
Andrii V. Goroshko

The introduction into industry of solutions conventionally called Industry 4.0 has made it necessary to change many of the traditional procedures of industrial data analysis based on the DOE (Design of Experiments) methodology. The increase in the number of controlled and observed factors, the intensity of the data stream, and the size of the analyzed datasets have revealed the shortcomings of the existing procedures. Modifying these procedures by adopting Big Data solutions and data-driven methods is becoming an increasingly pressing need. The article presents the current methods of DOE, considers the problems caused by the introduction of mass automation and data integration under Industry 4.0, and indicates the most promising areas in which to look for possible solutions.
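
For readers unfamiliar with the classical DOE workflow that the abstract contrasts with Industry 4.0 practice, the sketch below generates a two-level full factorial design and estimates main effects. The factor names and response function are synthetic; this is illustrative only, not a method from the article.

```python
# Classical DOE sketch: a 2^3 full factorial design with main-effect
# estimation. The process model is a toy function, not real data.
import itertools

factors = ["temperature", "pressure", "speed"]

# Every combination of low (-1) and high (+1) levels: 8 runs.
design = list(itertools.product([-1, +1], repeat=len(factors)))

def response(run):
    t, p, s = run
    return 10 + 3 * t - 2 * p + 0.5 * t * p + 1.5 * s  # toy process

observations = [response(run) for run in design]

# Main effect: mean response at the high level minus mean at the low level.
for i, name in enumerate(factors):
    high = [y for run, y in zip(design, observations) if run[i] == +1]
    low = [y for run, y in zip(design, observations) if run[i] == -1]
    print(f"{name}: main effect = {sum(high)/len(high) - sum(low)/len(low):+.2f}")
```

The shortcoming the abstract points to is already visible in the design size: with k controlled factors a full factorial needs 2^k runs, which stops being tenable once mass automation pushes up both k and the intensity of the data stream.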


2020, Vol. 14(1), pp. 57-63
Author(s):  
Andrés Armando Sánchez Martin
Luis Eduardo Barreto Santamaría
Juan José Ochoa Ortiz
Sebastián Enrique Villanueva Navarro

One of the difficulties in developing and testing the data analysis applications that consume IoT device data is the economic and time cost of building the IoT network. To mitigate these costs and expedite the development of IoT and analytical applications, we propose NIOTE, an IoT network emulator that generates sensor and actuator data from different devices and is easy to configure and deploy over the TCP/IP and MQTT protocols; the tool supports academic environments and conceptual validation in the design of IoT networks. The emulator facilitates the development of this type of application, reducing development time and improving the final quality of the product. Object-oriented programming concepts, architecture, and software design patterns are used to develop the emulator, which reproduces the behavior of IoT devices inside a specific network; any number of devices can be added in order to model and design any network. Each network sends data that is stored locally, in a specific format, to emulate transporting the data to a platform for data analysis.
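
NIOTE's source is not included in the abstract, but the core mechanism it describes, software device objects that publish synthetic sensor readings over MQTT, can be sketched as below. The broker address, topic layout, and payload format are assumptions made for illustration, and the constructor call follows the paho-mqtt 1.x style; this is not NIOTE's actual code.

```python
# Illustrative emulated device: publishes synthetic readings over MQTT.
import json
import random
import time

import paho.mqtt.client as mqtt  # pip install paho-mqtt


class EmulatedSensor:
    """One emulated IoT device: identity, value range, and reading logic."""

    def __init__(self, device_id: str, low: float, high: float):
        self.device_id = device_id
        self.low, self.high = low, high

    def read(self) -> dict:
        # Synthetic reading, timestamped so downstream analysis can order it.
        return {"device": self.device_id,
                "value": round(random.uniform(self.low, self.high), 2),
                "ts": time.time()}


def run(broker: str = "localhost", topic_root: str = "demo/sensors") -> None:
    client = mqtt.Client()  # paho-mqtt 1.x style; 2.x also takes a callback API version
    client.connect(broker, 1883)
    sensors = [EmulatedSensor(f"temp-{i}", 18.0, 26.0) for i in range(3)]
    for _ in range(10):  # ten emulation ticks
        for s in sensors:
            client.publish(f"{topic_root}/{s.device_id}", json.dumps(s.read()))
        time.sleep(1.0)
    client.disconnect()


if __name__ == "__main__":
    run()
```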


Author(s):  
Jason T. L. Wang
Peter A. Ng

This paper presents the design of an intelligent document processing system called TEXPROS. The system is a combination of filing and retrieval systems that supports storing, extracting, classifying, categorizing, retrieving and browsing information from a variety of documents. TEXPROS is built on object-oriented programming and rule-based specification techniques. In this paper, we describe the main design goals of the system, its data model, its logical file structure, and its strategies for document classification and categorization. We also illustrate various retrieval methods and query processing techniques through examples. Finally, applications of TEXPROS are presented, and we suggest ways in which the use of the system may alter the software process model.
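
TEXPROS itself is not reproduced here, but its stated combination of object-oriented programming and rule-based specification can be illustrated with a toy classifier in which documents are objects and classification rules are plain data. The rule set and class names are invented for this sketch.

```python
# Toy illustration: object-oriented documents, rule-based classification.
from dataclasses import dataclass


@dataclass
class Document:
    title: str
    body: str

    def text(self) -> str:
        return f"{self.title} {self.body}".lower()


# Each rule pairs a document class with keywords that must all appear.
RULES = [
    ("invoice", ["invoice", "amount due"]),
    ("memo", ["memo", "from:"]),
    ("report", ["report"]),
]


def classify(doc: Document) -> str:
    """Return the first class whose keywords all occur in the document."""
    for label, keywords in RULES:
        if all(k in doc.text() for k in keywords):
            return label
    return "unclassified"


print(classify(Document("Monthly report", "Sales figures for March")))  # report
```

Keeping the rules as data rather than code is what lets a filing-and-retrieval system of this kind be extended to new document categories without touching the classifier itself.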


Author(s):  
Vidadi Akhundov

This study draws attention to the under-explored area of strategic content analysis and the development of strategic vision for managers, where interpreting visualized big data supports the application of appropriate knowledge management strategies in regional companies. The study suggests improved models for processing data and applying Big Data solutions, and proposes a model of regional business processes organized around information clusters, which become the object of analysis as big data about the external and internal environment accumulates. Research has shown that traditional econometric and data collection techniques cannot be applied directly to Big Data analysis because of computational volatility or computational complexity. The paper briefly describes the essence of associative and causal data analysis methods and the problems that complicate their application to Big Data, and outlines a scheme for accelerated search for a set of causal relationships. The use of semantically structured models, cause-effect models and the K-clustering method for decision making on big data is practical and ensures the adequacy of the results; the article explains the stages of applying these models in practice.

In the course of the study, content analysis was carried out using the main methods of processing structured data, taking the countries of the world as an example and using synthetic indicators that show Industry 4.0 trends. Because the attributes by which countries can be grouped are diverse, the countries were compared in two groups: for the first group (developed countries) the results are presented in tabular form, and for the second group in explanatory form. The assessment of Industry 4.0 technologies used the statistical indicators "The share of medium and high-tech activities", "Competitiveness indicators", "Results in the field of knowledge and technology", "The share of medium and high-tech production in the total value added in the manufacturing industry", and "Industrial Competitiveness Index (CIP score)"; the rating of the countries was determined from the analysis of these indicators. The reasons for the computational difficulties of processing Big Data are given in the concluding part of the article.

Keywords: K-clustering method, causal links, data point, Euclidean distance
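
Reading the keywords "K-clustering, data point, Euclidean distance" as a k-means-style procedure is an assumption (the article's exact algorithm is not given here), but under that reading a minimal sketch on synthetic data looks like this:

```python
# Minimal k-means-style clustering with Euclidean distance (illustrative).
import math
import random

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def k_means(points, k, iterations=20):
    centroids = random.sample(points, k)
    for _ in range(iterations):
        # Assignment step: each data point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: euclidean(p, centroids[j]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = tuple(sum(col) / len(members) for col in zip(*members))
    return centroids, clusters

random.seed(1)
data = ([(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50)]
        + [(random.gauss(5, 1), random.gauss(5, 1)) for _ in range(50)])
centroids, _ = k_means(data, k=2)
print([tuple(round(c, 2) for c in ctr) for ctr in centroids])
```

Even this toy version hints at the scaling problem the article raises: every iteration touches every data point, so the cost grows with both the dataset size and the number of clusters.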


2016, Vol. 12(30), p. 40
Author(s):  
Rais Aziz Ahmad

Software requirements are one of the root causes of failure for IT software development projects. The requirements may be too high-level, may amount to mere wishes, may change frequently, or may be unclear or missing, for example goals, objectives, and strategies. Another major reason for project failure is the use of improper techniques for software requirements specification. Currently, most IT software development projects use textual techniques such as use cases, user stories, scenarios, and features for software requirements elicitation, analysis and specification. While software can be constructed in different programming languages, the primary focus here is on IT projects that use object-oriented programming languages. Object-oriented programming has several characteristics worth noting, such as its popularity, reusability, modularity, concurrency, abstraction and encapsulation. Object-oriented analysis and design transforms software requirements gathered with textual techniques into object-oriented programs. This transformation can cause complexity in identifying objects, classes and interfaces, which in turn complicates the object-oriented analysis and design. Because requirements can change over the course of a project and software design can likewise evolve during construction, tracing software requirements to objects and components can become difficult. Apart from adding to project complexity, this can degrade software quality and, in the worst case, cause the project to fail entirely. The goal of this article is to provide interface-driven techniques that reduce ambiguity among software requirements, improve traceability and simplify software requirements modelling.
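
As a hypothetical illustration of the interface-driven idea (not the article's own notation), the sketch below restates a user story as an explicit interface and keeps the story ID in the code, giving a traceable link from requirement through design to implementation. The story, its ID, and all names are invented.

```python
# Hypothetical interface-driven requirement: the user story becomes an
# explicit interface, and the story ID travels with the code.
from abc import ABC, abstractmethod


class AccountWithdrawal(ABC):
    """US-042: "As an account holder, I can withdraw cash up to my balance."

    Keeping the story ID in the interface docstring gives a traceable
    link from requirement to design to implementation.
    """

    @abstractmethod
    def withdraw(self, account_id: str, amount: float) -> float:
        """Return the new balance; reject overdrafts."""


class InMemoryAccounts(AccountWithdrawal):
    def __init__(self):
        self._balances = {"acc-1": 100.0}

    def withdraw(self, account_id: str, amount: float) -> float:
        balance = self._balances[account_id]
        if amount > balance:
            raise ValueError("US-042 violated: withdrawal exceeds balance")
        self._balances[account_id] = balance - amount
        return self._balances[account_id]


print(InMemoryAccounts().withdraw("acc-1", 40.0))  # 60.0
```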

