database servers
Recently Published Documents


TOTAL DOCUMENTS

67
(FIVE YEARS 11)

H-INDEX

10
(FIVE YEARS 1)

2021 ◽  
Vol 896 (1) ◽  
pp. 012029
Author(s):  
L R Loua ◽  
M A Budihardjo ◽  
S Sudarno

Abstract Water consumption during irrigation has been a much-researched area in agricultural activities, and because many irrigation systems in practice are inefficient, a considerable amount of water is wasted. As a result, intelligent systems have been designed that integrate water-saving techniques and climatic data collection to improve irrigation. An innovative decision-making system was developed in which an ontology contributes 50% of the decision and sensor values contribute the remaining 50%. The system bases its irrigation-scheduling decisions on a KNN machine learning algorithm. It also uses two database servers, an edge server and an IoT server, along with a GSM module, to reduce the data-transmission burden while also reducing latency. With this method, the sensor data can be traced and analysed within the network on the edge server before being transferred to the IoT server for future watering requirements. The water-saving technique ensured that the crops received the amount of water required for growth while preventing the soil from reaching its wilting point. Furthermore, the reduced irrigation water also limits potential runoff events. The results were displayed in an Android application.
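
As a rough illustration of the 50/50 decision fusion described above, the following sketch combines an ontology-derived score with sensor readings and lets a KNN classifier schedule irrigation. The feature names, weights, thresholds, and training rows are hypothetical, not taken from the paper.

```python
# Hypothetical training data: [soil_moisture_%, air_temp_C, ontology_score] -> 1 = irrigate, 0 = skip
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.array([[12, 34, 0.9], [18, 30, 0.7], [35, 22, 0.2],
              [40, 20, 0.1], [15, 33, 0.8], [38, 24, 0.3]])
y = np.array([1, 1, 0, 0, 1, 0])

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)

def irrigation_decision(moisture, temp, ontology_score):
    # Sensor evidence (normalized dryness) and ontology evidence each carry 50% weight
    sensor_score = max(0.0, min(1.0, (40.0 - moisture) / 40.0))
    fused = 0.5 * sensor_score + 0.5 * ontology_score
    # KNN schedules the actual watering event from the raw feature vector
    return bool(knn.predict([[moisture, temp, ontology_score]])[0]) and fused >= 0.5

print(irrigation_decision(14, 32, 0.85))  # True -> schedule irrigation
```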


2021 ◽  
Vol 11 (8) ◽  
pp. 2120-2125
Author(s):  
K. Muthumayil ◽  
R. Karuppathal ◽  
T. Jayasankar ◽  
B. Aruna Devi ◽  
N. B. Prakash ◽  
...  

Today, sensors generate vast amounts of data in different fields such as hospitals, the transport sector, social media, and so on. In hospitals, sensors attached to a patient's body monitor the pulse rate, heartbeat, head movement, eyes, and other body parts. The data collected every day by these sensors are stored in local data servers and database servers and must be handled effectively. Sensors are used in most everyday IoT applications, among which the smart city plays a crucial role. The aim of this work is to address the application of big data in healthcare and life science, including the different types of data that require special attention during processing. The work focuses on using big-data analytical techniques to process medical data. A large volume of unstructured cancer records is considered in order to identify and predict different types of cancer such as breast cancer, lung cancer, blood cancer, and so forth. The research segments thousands of cancer records from a broad cancer database into several smaller databases. Segmentation, classification, and prediction are then achieved using the KNN algorithm.
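
The segmentation-then-KNN pipeline described above can be sketched as follows; the features, labels, and record counts are synthetic stand-ins rather than the study's cancer data.

```python
# Illustrative sketch: segment records by class, then classify with KNN.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 3000
X = rng.normal(size=(n, 6))            # hypothetical numeric features per patient record
y = rng.integers(0, 3, size=n)         # hypothetical labels: 0 = breast, 1 = lung, 2 = blood cancer

# "Segmentation": split the broad database into per-class subsets for separate storage/analysis
segments = {int(label): X[y == label] for label in np.unique(y)}
print({k: v.shape[0] for k, v in segments.items()})

# Classification/prediction with KNN on the pooled records
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))   # meaningless here, since the data are random
```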


2021 ◽  
Author(s):  
Ivan Vito Ferrari

Background: Garlic (Allium sativum L.) is a common spice with many health benefits, mainly due to its diverse bioactive compounds such as organic sulphides, saponins, phenolic compounds, and polysaccharides. Several studies have demonstrated its anti-inflammatory, antibacterial, antiviral, antioxidant, cardiovascular-protective, and anticancer properties. In this work we investigated the main bioactive components of garlic through a bioinformatics approach. We are in an era of bioinformatics in which data can be predicted across the fields of medicine. Open-access in silico tools have revolutionized disease management through early prediction of the absorption, distribution, metabolism, excretion, and toxicity (ADMET) profiles of chemically designed, eco-friendly next-generation drugs. Methods: This paper covers the fundamental functions of open-access in silico prediction tools such as the PASS database (Prediction of Activity Spectra for Substances), which estimates the probable biological activity profiles of compounds. The paper also aims to support new researchers in the field of drug design and to investigate the best bioactive compounds in garlic. Results: Screening against each pharmacokinetic criterion identified the garlic compounds that adhere to all the ADMET properties. Conclusions: Open-access servers (the PASS database, SwissADME, PreADMET, and pkCSM) were employed to determine the ADMET (absorption, distribution, metabolism, excretion, and toxicity) attributes of garlic molecules and to identify promising molecules that satisfy the ADMET properties.
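
As a local, simplified stand-in for the online ADMET screens mentioned above (PASS, SwissADME, PreADMET, pkCSM), the following sketch applies a Lipinski-style pharmacokinetic filter with RDKit. The SMILES strings and thresholds are illustrative assumptions, not values reported by the paper.

```python
# Rule-of-five style screen over a couple of garlic-derived compounds (approximate structures).
from rdkit import Chem
from rdkit.Chem import Descriptors

candidates = {
    "allicin": "C=CCS(=O)SCC=C",          # approximate SMILES, for illustration
    "diallyl disulfide": "C=CCSSCC=C",
}

def passes_rule_of_five(smiles):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Descriptors.NumHDonors(mol) <= 5
            and Descriptors.NumHAcceptors(mol) <= 10)

for name, smi in candidates.items():
    print(name, "passes" if passes_rule_of_five(smi) else "fails")
```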


Author(s):  
Vijay Vir Singh ◽  
Abdul Kareem Lado Ismail ◽  
Ibrahim Yusuf ◽  
Ameer Hassan Abdullahi

A complex repairable computer-based test (CBT) network system studied in this paper consists of three client computers, a load balancer, two distributed database servers, and a centralized server, structured in a series configuration. Subsystem 1 consists of three homogeneous clients arranged in parallel, subsystem 2 comprises a load balancer, subsystem 3 comprises two distributed homogeneous database servers in parallel, and subsystem 4 consists of a centralized database server. First-order differential equations are derived from the transition diagram. The model is solved using the supplementary variable technique together with Laplace transforms. Reliability metrics such as availability, reliability, MTTF, MTTF sensitivity, and the cost function are estimated to assess the impact of failure and repair patterns on the reliability evaluations. The results indicate that system performance can be improved when copula repair is employed.
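
A rough numerical sketch of the series/parallel layout described above is given below, using steady-state availability A = μ/(λ + μ) per unit and assuming independent units with hypothetical failure and repair rates; the paper's own analysis relies on supplementary variables, Laplace transforms, and copula repair rather than this simplification.

```python
# Steady-state availability of the four-subsystem series arrangement, with
# hypothetical failure (lam) and repair (mu) rates and independence assumed.
def unit_availability(lam, mu):
    return mu / (lam + mu)

def parallel(avail, n):
    # Subsystem is up if at least one of n identical units is up
    return 1.0 - (1.0 - avail) ** n

lam_client, mu_client = 0.02, 1.0      # clients (3 in parallel)
lam_lb, mu_lb = 0.01, 0.8              # load balancer
lam_db, mu_db = 0.015, 0.9             # distributed DB servers (2 in parallel)
lam_cs, mu_cs = 0.01, 0.7              # centralized DB server

A1 = parallel(unit_availability(lam_client, mu_client), 3)
A2 = unit_availability(lam_lb, mu_lb)
A3 = parallel(unit_availability(lam_db, mu_db), 2)
A4 = unit_availability(lam_cs, mu_cs)

system_availability = A1 * A2 * A3 * A4   # series arrangement of the four subsystems
print(round(system_availability, 4))
```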


2021 ◽  
pp. 93-101
Author(s):  
D.S. Kucherenko ◽  

The article discusses the problems of managing the information security of an enterprise in a changing business climate. The existing AS TP security system is described together with its shortcomings and advantages. The specifics of the company's procedural and technological security capabilities are revealed. The enterprise's IT infrastructure is characterized, in the information security and cyber defence format, as consisting of three components: the application servers that deliver business applications; the database servers that contain business data; and the system administration channels used to manage and monitor the infrastructure, all of which need to work together as a coherent and coordinated system. A structured architecture is proposed that unites corporate cybersecurity into 11 functional areas covering the technical and operational breadth of enterprise cybersecurity. These functional areas are distinguished on the basis of their relative independence from each other and because they align well with the way staff, expertise, and responsibilities are divided in the enterprise. This corporate cybersecurity architecture provides the basis for managing the capabilities through which the enterprise applies audit, forensics, detection, and preventive controls. The structure supports consistent management of security capabilities and helps prioritize their deployment, maintenance, and updates over time. It also ensures strict accountability and good alignment of strategy, staff, budget, and technology with the organization's security needs. The structure is designed to be flexible and scalable regardless of the size of the enterprise, and provides an extensible mechanism for adjusting cyber defence over time in response to changing cyber threats.


Author(s):  
Selvine G. Mathias ◽  
Sebastian Schmied ◽  
Daniel Grossmann

Abstract Database management and monitoring is an inseparable part of any industry. A uniform scheme for monitoring relational databases without explicit user access to the database servers has not been much explored outside the database environment. In this paper, we present a scheme for distributing database-related information from Open Platform Communications Unified Architecture (OPC UA) servers to clients when multiple databases are involved in a factory. The aim is for external but relevant clients to be able to monitor this information mesh without explicit access to the user schemas. A methodology to dispense data from, as well as detect changes in, databases using SQL queries and events is outlined and implemented using OPC UA servers. The structure can be used as a remote viewing application for multiple databases in one address space of an OPC UA server.
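
A minimal sketch of the idea, assuming the community python-opcua package and an SQLite database with a hypothetical orders table: the server polls the database and publishes a row count into its address space, so OPC UA clients can monitor it without any direct access to the database schema. The endpoint, namespace, and polling approach are assumptions, not the paper's implementation.

```python
import sqlite3
import time
from opcua import Server

db = sqlite3.connect("factory.db")
db.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, item TEXT)")
db.commit()

server = Server()
server.set_endpoint("opc.tcp://0.0.0.0:4840/dbmonitor/")
idx = server.register_namespace("http://example.org/dbmonitor")
orders_node = server.get_objects_node().add_object(idx, "OrdersDB")
row_count = orders_node.add_variable(idx, "RowCount", 0)

server.start()
try:
    while True:
        # Poll the database and publish the change into the OPC UA address space
        count = db.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
        row_count.set_value(count)
        time.sleep(5)
finally:
    server.stop()
```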


2020 ◽  
Vol 9 (4) ◽  
pp. 260 ◽  
Author(s):  
Tomás Fernández ◽  
José Luis Pérez-García ◽  
José Miguel Gómez-López ◽  
Javier Cardenal ◽  
Julio Calero ◽  
...  

Gully erosion is one of the main processes of soil degradation, representing 50%–90% of total erosion at basin scales; its precise characterization has therefore received growing attention in recent years. Geomatics techniques, mainly photogrammetry and LiDAR, can support the quantitative analysis of gully development. This paper deals with the application of these techniques, using aerial photographs and airborne LiDAR data available from public database servers, to identify and quantify gully erosion over a long period (1980–2016) in an area of 7.5 km2 of olive groves. Several historical flights (1980, 1996, 2001, 2005, 2009, 2011, 2013 and 2016) were aligned in a common coordinate reference system with the LiDAR point cloud, and then digital surface models (DSMs) and orthophotographs were obtained. Next, the analysis of the DSMs of differences (DoDs) allowed the identification of gullies, the calculation of the affected areas, and the estimation of height differences and volumes between models. These analyses yield an average depletion of 0.50 m and a volume loss of 85,000 m3 in the gully area, with some periods (2009–2011 and 2011–2013) showing rates of 10,000–20,000 m3/year (20–40 t/ha per year). The manual editing of DSMs to obtain digital terrain models (DTMs) in a detailed sector allowed an analysis of the influence of this operation on the erosion calculations, finding that it is not significant except in gully areas with a very steep shape.
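
The DoD calculation behind these volume estimates can be sketched with plain NumPy; the grids, cell size, and noise threshold below are synthetic, and real DSMs would be read from co-registered rasters.

```python
import numpy as np

cell_size = 0.5                      # metres per pixel (assumed)
rng = np.random.default_rng(1)
dsm_1980 = 400.0 + rng.normal(0, 0.05, (200, 200))
dsm_2016 = dsm_1980.copy()
dsm_2016[80:120, 90:110] -= 1.2      # synthetic gully incision

dod = dsm_2016 - dsm_1980            # negative values = surface lowering (erosion)
threshold = -0.15                    # ignore differences within DSM noise (assumed)
eroded = dod < threshold

mean_depletion = dod[eroded].mean()
eroded_area_m2 = eroded.sum() * cell_size**2
eroded_volume_m3 = -(dod[eroded].sum()) * cell_size**2

print(f"mean depletion: {mean_depletion:.2f} m")
print(f"eroded area: {eroded_area_m2:.0f} m2, eroded volume: {eroded_volume_m3:.0f} m3")
```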


2020 ◽  
Vol 9 (4) ◽  
pp. 249
Author(s):  
Tomáš Pohanka ◽  
Vilém Pechanec

This paper compares database replication of spatial data in PostgreSQL and MySQL. Database replication addresses the problems that arise when a single database server is overloaded with write and read queries. There are many replication mechanisms, and they handle data differently. Criteria for an objective comparison were set to test the mechanisms and determine the bottleneck of the replication process. The tests were performed on real national vector spatial datasets, namely ArcCR500, Data200, Natural Earth, and the Estimated Pedologic-Ecological Unit. HWMonitor Pro was used to monitor the PostgreSQL database, network, and system load, and Monyog was used to monitor the MySQL activity (data and SQL queries) in real time. Both database servers were run on computers with the Microsoft Windows operating system. The results of the tests of both replication mechanisms led to a better understanding of these mechanisms and allow informed decisions for future deployments. Graphs and tables present the statistical data and describe the replication mechanisms in specific situations. PostgreSQL with the Slony extension and asynchronous replication synchronized a batch of changes with a high transfer speed and a high server load. MySQL with synchronous replication synchronized every changed record with a low impact on server performance and network bandwidth.
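
One way to probe the replication bottleneck discussed above is to write a timestamped row on the primary and poll a replica until it appears. The sketch below uses psycopg2 with assumed connection strings and a hypothetical heartbeat table; the study itself relied on Slony, native MySQL replication, and dedicated monitoring tools.

```python
import time
import uuid
import psycopg2

primary = psycopg2.connect("host=primary.local dbname=spatial user=repl_test")
replica = psycopg2.connect("host=replica.local dbname=spatial user=repl_test")
primary.autocommit = True
replica.autocommit = True

token = str(uuid.uuid4())
with primary.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS heartbeat (token TEXT, sent TIMESTAMPTZ DEFAULT now())")
    cur.execute("INSERT INTO heartbeat (token) VALUES (%s)", (token,))
start = time.monotonic()

# Poll the replica until the row written on the primary becomes visible
while True:
    with replica.cursor() as cur:
        cur.execute("SELECT 1 FROM heartbeat WHERE token = %s", (token,))
        if cur.fetchone():
            break
    time.sleep(0.05)

print(f"replication lag ~ {time.monotonic() - start:.3f} s")
```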


In the general case, database triggers may be applied to designated queries to validate database requests. Specifically, search mechanisms may need to be adopted to identify the query terms, and it may also be necessary to eliminate ambiguities during updates by checking consistency and durability. Many database systems support aggregate functions, since they are closely linked to the statistical analysis of large-scale data. Depending on the requirements and the schedule, multilevel aggregation may be considered for report generation and for the implementation of join predicates. In complex cases, direct requests may be used to schedule the entire set of database operations. While optimizing database queries, alternative query plans may be considered that implement specific routines to eliminate duplicate query terms. It may also be possible to containerize the query plans associated with several data servers, exploiting inter-operator parallelism, with the assemblers linked to the query plans in the servers steering the process accordingly. When database query plans are implemented inside a cloud storage system, the data may be automatically partitioned and replicated, and the servers may dynamically rebalance the existing load in response to the query plans. The queries as well as the transactions may be uncommon during the optimization process, and applications may communicate following standard activity protocols linked to the database servers. When linking query terms to the databases, it may also be necessary to incorporate metadata for plan execution. Transactional database applications linked to a relational cloud often provide facilities for configuring and accessing the data and may face challenges such as scalability and privacy. To overcome these issues, tasks may be relocated and rearranged across database servers, so that better performance is achieved for complex transactions. Aggregation techniques linked to data partitioning may also enable structured queries to yield better performance. In this paper, we intend to obtain query terms along with the threshold values linked to virtual databases.
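
The points about alternative query plans, join predicates, and aggregation can be made concrete with a small example; SQLite here is only a local stand-in for the distributed data servers the paragraph has in mind, and the tables and query are hypothetical.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
    CREATE INDEX idx_orders_customer ON orders(customer_id);
""")

query = """
    SELECT c.region, COUNT(*) AS n_orders, SUM(o.amount) AS total
    FROM orders AS o
    JOIN customers AS c ON c.id = o.customer_id
    GROUP BY c.region
"""

# EXPLAIN QUERY PLAN reveals which access path the optimizer chose for the join and aggregation
for row in con.execute("EXPLAIN QUERY PLAN " + query):
    print(row)
```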


In today's modern era, the Semantic Web is progressing at a tremendous speed. Semantic Web services have opened new possibilities through which anyone can avail themselves of various services, but if these technologies are not properly protected, their use can put users in danger. Many people find it difficult to assess the security hazards associated with Semantic Web services. Many Web services, such as e-commerce, store various types of data on their database servers, and these database servers are distributed across the globe. An e-commerce service may use different kinds of databases for storing different information and has to handle assorted types of data. A single database program is not suitable for storing and processing such mixed data, as it increases the processing overhead and ultimately reduces the performance of the Web service, so the need arises to define an appropriate subclass of databases for each document type. This paper proposes a set of databases for each type of document stored on an e-commerce platform. An effort has been made to define a proper set of database programs for a Web service such as e-commerce that adequately manages the requirements of a particular type of data. Experiments were conducted on an e-commerce Web service to evaluate the efficacy of the proposed approach. The results show that if a well-structured database is used for a Web service such as e-commerce, the response can be very quick and the chances of packet loss are very low. If network attacks such as DoS and man-in-the-middle attacks are present within the request, the performance of the Web service is severely affected, and such attacks lead to a greater chance of packet loss. The paper also delineates the impact of network attacks on Web service performance.
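
A minimal sketch of the per-document-type database routing argued for above: each document type is mapped to its own store. The document types, store choices, and in-memory stand-ins are assumptions for illustration only.

```python
from typing import Any, Dict

class InMemoryStore:
    """Stand-in for a real backend (relational, document, key-value, ...)."""
    def __init__(self, name: str):
        self.name = name
        self.items: Dict[str, Any] = {}

    def put(self, key: str, doc: Any) -> None:
        self.items[key] = doc

# One store per document type instead of a single database for everything
ROUTES = {
    "product_catalog": InMemoryStore("document-store"),
    "order":           InMemoryStore("relational-store"),
    "session":         InMemoryStore("key-value-store"),
    "review":          InMemoryStore("full-text-store"),
}

def save(doc_type: str, key: str, doc: Any) -> str:
    store = ROUTES.get(doc_type)
    if store is None:
        raise ValueError(f"no database configured for document type {doc_type!r}")
    store.put(key, doc)
    return store.name

print(save("order", "o-1001", {"customer": 42, "total": 19.99}))   # -> relational-store
```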

