centralized database
Recently Published Documents

TOTAL DOCUMENTS: 61 (five years: 19)
H-INDEX: 5 (five years: 1)

2021 · Vol 3 (3) · pp. 199-212
Author(s): Evgeny Pakulov, Sergey Ovanesyan

The article examines the role of the State Automated System of the Russian Federation «Elections» in ensuring active citizen participation in the election process. We analyze the prerequisites for reengineering the system and determine the main approaches to constructing models of the information system that supports the electoral process. The authors analyze the problems that arise when using the centralized database model and demonstrate the importance of providing election commissions located in hard-to-reach settlements with reliable communication channels. Solutions are proposed to mitigate the negative consequences of the transition to a centralized database model by caching data locally on the web client side. The mechanism of local caching using the Service Worker API is considered, and various scenarios for using the Service Worker in the electoral process, taking into account the category and importance of the data, are studied and demonstrated. The study also analyzes the software employed by election commissions, the Sputnik browser, for the possibility of applying the proposed local caching concept in it.
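As a minimal sketch of the caching approach described above (the article does not publish its code; the endpoint paths /api/reference/ and /api/critical/ are hypothetical), a Service Worker can serve low-volatility reference data cache-first, so that commissions on unreliable channels keep working, while critical data stays network-first with the last good copy as a fallback:

```typescript
// sw.ts — a hedged sketch, not the system's actual code.
// Requires "webworker" in tsconfig "lib".
const sw = self as unknown as ServiceWorkerGlobalScope;
const CACHE = "election-data-v1";

sw.addEventListener("install", (event: ExtendableEvent) => {
  // Pre-cache reference data while the channel is available.
  event.waitUntil(
    caches.open(CACHE).then((c) => c.addAll(["/api/reference/precincts"]))
  );
});

sw.addEventListener("fetch", (event: FetchEvent) => {
  const url = new URL(event.request.url);
  if (url.pathname.startsWith("/api/reference/")) {
    // Cache-first: tolerate channel outages for rarely changing data.
    event.respondWith(
      caches.match(event.request).then((hit) => hit ?? fetch(event.request))
    );
  } else if (url.pathname.startsWith("/api/critical/")) {
    // Network-first: prefer fresh data, fall back to the last cached copy.
    event.respondWith(
      fetch(event.request)
        .then((resp) => {
          const copy = resp.clone();
          caches.open(CACHE).then((c) => c.put(event.request, copy));
          return resp;
        })
        .catch(() =>
          caches.match(event.request).then((hit) => hit ?? Response.error())
        )
    );
  }
});
```

Splitting routes by data category mirrors the article's point that caching policy should depend on the importance and volatility of each class of election data.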


Author(s): S.M. Gazalieva, M.N. Yugay, N.Y. Ilyushina

Abstract: The paper presents the results of statistical studies of disability due to occupational diseases in the Karaganda region between 2016 and 2020. The analysis was conducted using the automated information system "Centralized Database of Persons with Disabilities" (CDPD). The structure of primary disability was studied by disability group severity, age, sex, and place of work of the affected workers.


Author(s): Alexander Alekseevich Nedostup, Alexey Olegovich Razhev

The article highlights the problems of managing trawl fishing, increasing operational efficiency, and reducing the influence of the human factor. The authors consider using a neural network, combined with a mathematical model and Big Data technologies, for predictive modeling in the automatic control of trawl fishing, in order to reduce energy and labor costs and increase fishing productivity. The proposed approach makes it possible to account for factors neglected in the mathematical model because their mathematical description is too complex (e.g., time of day, season, weather conditions, density and congestion of fish, availability and distribution of food resources, other aquatic species), and to collect and accumulate data from many fishing operations and different fishers for subsequent use in fishery management. The proposed solution corrects the output of the mathematical model with the output of a neural network. Before fishing, the weight coefficients of the neural network are retrieved from a centralized database using Big Data technologies, selected by fishing area and target species. During fishing, the network's input data and the final (adjusted) control outputs are recorded; at the end of fishing, the saved data are used to train the neural network, and the updated weight coefficients are written back to the centralized database. Training takes place between fishing trips on a centralized shared neural network, and the adjusted weight coefficients are updated in the common database of fishers.
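A rough sketch of the control loop described above, under stated assumptions (the article gives no code; the state fields, the stand-in physics formula, and the /weights endpoint are all hypothetical): the deterministic model's output is corrected by a small network whose weights are fetched from the central store before the trip.

```typescript
// Hypothetical sketch of the model-plus-network correction loop.
interface TrawlState { depth: number; speed: number; hour: number; season: number; }

// Placeholder for the deterministic physics model of the trawl system.
function physicalModel(s: TrawlState): number {
  return 0.8 * s.speed - 0.01 * s.depth; // stand-in formula, not the real model
}

// One-layer network predicting a correction from factors the
// physical model ignores (time of day, season, ...).
function nnCorrection(w: number[], s: TrawlState): number {
  const x = [s.depth, s.speed, s.hour, s.season, 1]; // last entry is the bias term
  return x.reduce((acc, xi, i) => acc + xi * w[i], 0);
}

async function controlSignal(s: TrawlState): Promise<number> {
  // Weights are fetched once per trip from the centralized database,
  // keyed by fishing area and target species (endpoint is hypothetical).
  const w: number[] = await fetch("/weights?area=26&species=cod").then((r) => r.json());
  return physicalModel(s) + nnCorrection(w, s); // corrected output
}
```

The recorded inputs and adjusted outputs from each trip would then serve as training pairs for updating w between trips.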


2020
Author(s): Halim Tannous, Shadi Akiki, Rasha E. Boulos, Charlene El Khoury Eid, Ghadi El Hasbani, ...

Abstract: The world has been dealing with the COVID-19 pandemic since December 2019, and much effort has focused on tracking the spread of the virus by gathering testing statistics and generating viral genomic sequences. Unfortunately, there is neither a single comprehensive resource with global historical testing data nor a centralized database with summary statistics of the identified genomic variants. We merged different pre-aggregated historical testing data and complemented them with our manually extracted ones, which consist of 6852 historical test statistics from 76 countries/states unreported in any other dataset at the date of submission, making our dataset the most comprehensive to date. We also analyzed all publicly deposited SARS-CoV-2 genomic sequences in GISAID and annotated their variants. Both datasets can be accessed through our interactive dashboard, which also provides important insights on different outbreak trends across countries and states. The dashboard is available at https://bioinfo.lau.edu.lb/gkhazen/covid19. A daily updated version of the datasets can be downloaded from github.com/KhazenLab/covid19-data.


2020 · Vol 41 (S1) · pp. s431-s432
Author(s): Rachael Snyders, Hilary Babcock, Christopher Blank

Background: Immunization resistance is fueling a resurgence of vaccine-preventable diseases in the United States, where several large measles outbreaks and 1,282 measles cases were reported in 2019. Concern about these outbreaks prompted a large healthcare organization to develop a preparedness plan to limit healthcare-associated transmission. Verification of employee rubeola immunity, with immunization when necessary, was prioritized because of the transmission risk to nonimmune employees and the role of healthcare personnel in responding to measles cases.

Methods: The organization employs ∼31,000 people in diverse settings. A multidisciplinary team was formed by the infection prevention, infectious diseases, occupational health, and nursing departments to develop the preparedness plan. Immunity was monitored using a centralized database. Employees without evidence of immunity were asked to provide proof of vaccination, defined by the CDC as 2 appropriately timed doses of rubeola-containing vaccine, or laboratory confirmation of immunity. Employees were given 30 days to provide documentation or to obtain a titer at the organization's expense. Staff with negative titers were given 2 weeks to coordinate with the occupational health department for vaccination. Requests for medical or religious accommodations were evaluated by occupational health staff, the occupational health medical director, and the human resources department. All employees were included, though patient-interfacing employees in departments considered higher risk were prioritized: the emergency, dermatology, infectious diseases, labor and delivery, obstetrics, and pediatrics departments.

Results: At the onset of the initiative in June 2019, 4,009 employees lacked evidence of immunity. As of November 2019, evidence of immunity had been obtained for 3,709 employees (92.5%): serological evidence of immunity was obtained for 2,856 (71.2%), vaccine was administered to 584 (14.6%), and evidence of previous vaccination was provided by 269 (6.7%). Evidence of immunity has not been documented for 300 (7.5%). The organization administered 3,626 serological tests and provided 997 vaccines, costing ∼$132,000. Disposition by serological testing is summarized in Table 1.

Conclusions: A measles preparedness strategy should include proactive assessment of employees' immune status. It is possible to expediently assess a large number of employees using a multidisciplinary team with access to a centralized database. Consideration may be given to prioritizing high-risk departments and patient-interfacing roles to manage workload.

Funding: None. Disclosures: None.


2020 · Vol 1 (3) · pp. 119-126
Author(s): Eka Septiawati, Siti Sauda

No company today can be separated from technology, which is expected to assist in every job, for example a centralized database whose data can be accessed simultaneously. CV. Cahaya Abadi is a construction company in the electricity sector that handles the billing of customers' electricity arrears in villages of Sekayu District. Electricity bill arrears at PT. Muba Electric Power remain a very large problem: total outstanding bills reach Rp. 30,911,833,490 across 10 districts comprising 20,677 customers. Because the usage recorded each month by the meter-reading officer does not match actual consumption, inflated bills appear and customers become reluctant to pay. To overcome these problems, a database was built that centrally connects all departments within CV. Cahaya Abadi, so that every job achieves the desired results with structured performance. The database development method used in this study is the Database Life Cycle (DBLC).


2020 · Vol 2020 · pp. 1-20
Author(s): Tien Pham Van, Nguyen Pham Van, Trung Ha Duyen

Increasingly inexpensive unmanned aerial vehicles (UAVs) are helpful for searching and tracking moving objects in ground events. Previous works either assume that data about the targets are sufficiently available or rely solely on on-board electronics (e.g., camera and radar) to chase them; in a searching mission, path planning is essentially preprogrammed before takeoff. Meanwhile, a large-scale wireless sensor network (WSN) is a promising means for monitoring events continuously over immense areas, but due to disadvantageous networking conditions it is hard to maintain a centralized database with enough data to instantly estimate target positions. In this paper, we therefore propose an online self-navigation strategy for a UAV-WSN integrated system to supervise moving objects. A UAV on duty exploits data collected on the move from ground sensors together with its own sensing information, autonomously executing edge processing on the available data to find the best direction toward a target. The designed system eliminates the need for any centralized database (fed continuously by ground sensors) in making navigation decisions. We employ a local bivariate regression to model the acquired sensor data, which lets the UAV optimally adjust its flying direction in response to reported data and object motion. In addition, we construct a comprehensive searching and tracking framework in which the UAV flexibly sets its operation mode, so that minimal communication and computation overhead is induced. Numerical results obtained from NS-3 and Matlab co-simulations show that the designed framework is promising in terms of accuracy and overhead costs.
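As a rough illustration of the local bivariate regression step (the paper's actual estimator is more elaborate; the Reading type and function names here are hypothetical), fitting a plane z = a + b·x + c·y to nearby sensor readings by least squares yields a slope vector (b, c), the estimated intensity gradient, whose direction the UAV can follow toward the target:

```typescript
// Fit z = a + b*x + c*y to ground-sensor readings by least squares;
// (b, c) approximates the signal gradient, i.e., the heading toward
// the target. A sketch, not the paper's exact formulation.
interface Reading { x: number; y: number; z: number; } // position + signal strength

function gradientHeading(readings: Reading[]): { b: number; c: number } {
  // Accumulate the sums for the 3x3 normal equations A * [a, b, c]^T = v.
  let n = 0, sx = 0, sy = 0, sxx = 0, sxy = 0, syy = 0, sz = 0, sxz = 0, syz = 0;
  for (const r of readings) {
    n++; sx += r.x; sy += r.y;
    sxx += r.x * r.x; sxy += r.x * r.y; syy += r.y * r.y;
    sz += r.z; sxz += r.x * r.z; syz += r.y * r.z;
  }
  const A = [[n, sx, sy], [sx, sxx, sxy], [sy, sxy, syy]];
  const v = [sz, sxz, syz];
  // 3x3 determinant by cofactor expansion.
  const det = (m: number[][]) =>
    m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1]) -
    m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0]) +
    m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
  const d = det(A);
  // Cramer's rule: replace column i with v to solve for coefficient i.
  const col = (i: number) =>
    A.map((row, r) => row.map((x, c) => (c === i ? v[r] : x)));
  return { b: det(col(1)) / d, c: det(col(2)) / d };
}

// The UAV then steers along atan2(c, b), toward increasing signal.
```

Refitting on each batch of freshly reported readings is what lets the heading track a moving target without any central position database.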


Author(s): Kanchan Pradhan, Gaokarna Subhanrao Ghule, Durgesh Rajkumar Yadav, Snehal Suhas Shinde

Increasing digital technology has revolutionized people's lives, but today's banking systems are open to threats of fraud and cyber-attacks. Because they are built on centralized databases, an attacker who penetrates one such database can compromise all of the bank's customer information and data. This vulnerability can be reduced by rebuilding banking systems on top of blockchain technology, which removes the centralized database architecture and decentralizes the data over the blockchain, reducing the threat of the database being hacked. Since every transaction on the blockchain is verified by each node of the chain, transactions become more and more secure, making the overall banking system faster and more secure.
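The tamper-evidence argument above can be illustrated with a toy hash chain (a generic sketch, not any bank's actual ledger or a full blockchain with consensus): each block commits to its predecessor's hash, so altering any stored transaction breaks every later link, and any verifying node detects it.

```typescript
// Toy hash chain: tampering with one record invalidates all later hashes.
import { createHash } from "node:crypto";

interface Block { index: number; prevHash: string; data: string; hash: string; }

function hashBlock(index: number, prevHash: string, data: string): string {
  return createHash("sha256").update(`${index}|${prevHash}|${data}`).digest("hex");
}

function append(chain: Block[], data: string): void {
  const prev = chain[chain.length - 1];
  const index = prev ? prev.index + 1 : 0;
  const prevHash = prev ? prev.hash : "0".repeat(64); // genesis block
  chain.push({ index, prevHash, data, hash: hashBlock(index, prevHash, data) });
}

// Every node re-verifies the whole chain independently; a single edited
// transaction fails the hash check or breaks the prevHash linkage.
function verify(chain: Block[]): boolean {
  return chain.every((b, i) =>
    b.hash === hashBlock(b.index, b.prevHash, b.data) &&
    (i === 0 || b.prevHash === chain[i - 1].hash)
  );
}
```

In a real banking deployment, this per-node verification replaces trust in a single central database operator, which is the core of the decentralization claim.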


Author(s): Durgesh Rajkumar Yadav, Gaokarna Subhanrao Ghule, Snehal Suhas Shinde

Increasing digital technology has revolutionized people's lives, but today's banking systems are open to threats of fraud and cyber-attacks. Because they are built on centralized databases, an attacker who penetrates one such database can compromise all of the bank's customer information and data. This vulnerability can be reduced by rebuilding banking systems on top of blockchain technology, which removes the centralized database architecture and decentralizes the data over the blockchain, reducing the threat of the database being hacked. Since every transaction on the blockchain is verified by each node of the chain, transactions become more secure, making the overall banking system faster and more secure.


Publications · 2020 · Vol 8 (1) · pp. 6
Author(s): Julia Lanoue

Open Access data plays an increasingly important role in discussions of environmental issues. Limited availability or poor-quality data can impede citizen participation in environmental dialogue, undermining citizens' voices. This study assesses the quality of Open Access environmental data, and the barriers to its accessibility, in the Thames Estuary. Data quality is assessed by its ability to track long-term trends in temperature, salinity, turbidity, and dissolved oxygen. The inconsistencies found in the data required analyses and careful interpretation beyond what would be expected of a citizen, and the lack of clear documentation and of a centralized database acted as a major barrier to usability. A set of recommendations is produced for estuarine monitoring, including defining minimum standards for metadata, creating a centralized database for better quality control and accessibility, and developing flexible monitoring protocols that can incorporate new hypotheses and partnerships. The goal of the recommendations is to create monitoring that encourages better science and wider participation in the natural environment.

