Research and Implementation of the Secure Database-Update Mechanism

2014 ◽  
Vol 513-517 ◽  
pp. 1752-1755 ◽  
Author(s):  
Chun Liu ◽  
Kun Tan

For a safety-critical computer, large-scale data such as a database, which must be transferred within a very short time, cannot be voted on directly. This paper proposes a database update algorithm for safety-critical computers based on status voting, which votes on the status of the database instead of the database itself. The algorithm solves the problem of voting on too much data in a short time, and compares the database versions of different modules in real time. A Markov model is built to calculate the safety and reliability of the algorithm, and the results show that it meets the update requirements of a safety-critical computer.

1. Communication protocol for database update

1.1 TFTP protocol

TFTP is a simple protocol for transferring files. It is usually implemented over UDP, although the specification does not mandate a particular transport, and TCP can be used in special cases. The protocol is designed for transferring small files, so it lacks many of the functions FTP usually offers: it can only read a file from, or write a file to, a server; it cannot list directories and provides no authentication. It transfers 8-bit data in three modes: netascii, an eight-bit ASCII form; octet, raw eight-bit data; and mail, which is no longer supported and delivered the data directly to a user rather than saving it as a file.

1.2 SRTP: Ethernet secure real-time data transfer protocol
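The protocol's simplicity is easy to see on the wire. The following is a minimal sketch, not taken from the paper, of building and sending a TFTP read request (RRQ) over UDP in Python, following the packet layout in RFC 1350; the server address and filename are placeholder assumptions.

```python
import socket
import struct

def tftp_rrq(filename: str, mode: str = "octet") -> bytes:
    """Build a TFTP read-request (RRQ) packet per RFC 1350:
    2-byte opcode (1 = RRQ), filename, NUL, transfer mode, NUL."""
    return (struct.pack("!H", 1)
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")

# Hypothetical server address; TFTP servers listen on UDP port 69.
SERVER = ("192.0.2.10", 69)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
sock.sendto(tftp_rrq("database.bin"), SERVER)

# The server answers with DATA packets (opcode 3) carrying up to
# 512 bytes each; every block must be acknowledged by number.
data, addr = sock.recvfrom(516)
opcode, block = struct.unpack("!HH", data[:4])
print(f"opcode={opcode} block={block} payload={len(data) - 4} bytes")
```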

2021 ◽  
Vol 77 (2) ◽  
pp. 98-108
Author(s):  
R. M. Churchill ◽  
C. S. Chang ◽  
J. Choi ◽  
J. Wong ◽  
S. Klasky ◽  
...  

2014 ◽  
Vol 571-572 ◽  
pp. 497-501 ◽  
Author(s):  
Qi Lv ◽  
Wei Xie

Real-time log analysis over large-scale data is important for many applications; here, real-time means a UI latency within 100 ms. Techniques that efficiently support real-time analysis over large log data sets are therefore desired. MongoDB provides good query performance, an aggregation framework, and a distributed architecture, which make it suitable for real-time data queries and massive log analysis. In this paper, a novel implementation approach for an event-driven file log analyzer is presented, and the performance of query, scan, and aggregation operations over MongoDB, HBase, and MySQL is compared. Our experimental results show that HBase is the best balanced across all operations, while MongoDB achieves query times below 10 ms for some operations, making it the most suitable for real-time applications.
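For a concrete picture of the workload, here is a minimal sketch, not taken from the paper, of the kind of MongoDB aggregation such a log analyzer might run; the `logdb.logs` collection and its `ts`/`level` fields are assumptions.

```python
from datetime import datetime, timedelta
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
logs = client["logdb"]["logs"]

# An index on the timestamp keeps range scans fast enough for
# interactive (sub-100 ms) dashboards.
logs.create_index([("ts", ASCENDING)])

# Count log entries per severity level over the last hour.
pipeline = [
    {"$match": {"ts": {"$gte": datetime.utcnow() - timedelta(hours=1)}}},
    {"$group": {"_id": "$level", "count": {"$sum": 1}}},
    {"$sort": {"count": -1}},
]
for row in logs.aggregate(pipeline):
    print(row["_id"], row["count"])
```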


Author(s):  
Sepehr Fathizadan ◽  
Feng Ju ◽  
Kyle Rowe ◽  
Alex Fiechter ◽  
Nils Hofmann

Abstract. Production efficiency and product quality need to be addressed simultaneously to ensure the reliability of large-scale additive manufacturing. Specifically, print surface temperature plays a critical role in determining the quality characteristics of the product. Moreover, heat transfer via conduction, arising from the spatial correlation between locations on the surface of large and complex geometries, necessitates more robust methodologies for extracting and monitoring the data. In this paper, we propose a framework for real-time data extraction from thermal images, as well as a novel method for controlling layer time during the printing process. A FLIR™ thermal camera captures and stores the stream of images of the print surface while the Thermwood Large Scale Additive Manufacturing (LSAM™) machine prints components. A set of digital image processing tasks extracts the thermal data. Separate regression models based on the real-time thermal imaging data are built for each location on the surface to predict its temperature. A control method is then proposed to find the best time for printing the next layer given these predictions. Finally, several scenarios based on the cooling dynamics of the surface structure were defined and analyzed, and the results were compared to the current fixed layer-time policy. We conclude that the proposed method can significantly increase efficiency by reducing overall printing time while preserving quality.
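To make the control idea concrete, here is a minimal sketch, not the authors' code: fit a per-location cooling model from the thermal stream, then start the next layer once the slowest-cooling location is predicted to have reached a target temperature. The exponential cooling form, the thresholds, and the ambient temperature are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Newton-style cooling model: T(t) = T_amb + (T0 - T_amb) * exp(-k t).
def cooling(t, T0, k, T_amb=25.0):
    return T_amb + (T0 - T_amb) * np.exp(-k * t)

def time_to_target(samples_t, samples_T, T_target=90.0, T_amb=25.0):
    """Fit a cooling curve to one surface location's temperature
    history and predict the time until it reaches T_target."""
    (T0, k), _ = curve_fit(lambda t, T0, k: cooling(t, T0, k, T_amb),
                           samples_t, samples_T, p0=[samples_T[0], 0.05])
    if T0 <= T_target:
        return 0.0
    # Solve T_amb + (T0 - T_amb) * exp(-k t) = T_target for t.
    return np.log((T0 - T_amb) / (T_target - T_amb)) / k

def next_layer_time(histories, T_target=90.0):
    """The next layer starts when every monitored location is ready,
    i.e. after the slowest-cooling location's predicted wait."""
    return max(time_to_target(t, T, T_target) for t, T in histories)
```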


2018 ◽  
Vol 7 (2.31) ◽  
pp. 240
Author(s):  
S Sujeetha ◽  
Veneesa Ja ◽  
K Vinitha ◽  
R Suvedha

In the existing scenario, a patient has to go to the hospital to take the necessary tests, consult a doctor, and buy the prescribed medicines, or else use a dedicated healthcare application. Time is therefore wasted at hospitals and in medical shops, and in the case of healthcare applications, face-to-face interaction with the doctor is not available. These drawbacks can be addressed by Medimate, an ailment diffusion control system with real-time large-scale data processing. The purpose of Medimate is to establish a teleconference medical system that can be used in remote areas. Medimate is configured for better diagnosis and medical treatment for rural people. The system is fitted with a heartbeat sensor, a temperature sensor, an ultrasonic sensor, and a load cell to monitor the patient's health parameters. Voice instructions are provided for easier access. An application enabling video and voice communication with the doctor through a camera and headphones is installed at both ends. The doctor examines the patient and prescribes the medicines. A medical dispenser delivers the medicine to the patient as per the prescription. A QR code is generated by Medimate for each prescription, and that QR code can be reused for the same medical condition in the future. Medical details are updated on the server periodically.
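As an illustration of the prescription QR step only (a sketch under assumed field names, not the paper's implementation), a prescription record can be serialized and encoded with the Python `qrcode` library:

```python
import json
import qrcode

# Hypothetical prescription record; the field names are assumptions.
prescription = {
    "patient_id": "P-1042",
    "doctor_id": "D-17",
    "medicines": [{"name": "paracetamol", "dose_mg": 500, "per_day": 3}],
    "issued": "2018-04-12",
}

# Encode the record as a QR image that the dispenser could later
# scan to repeat the same prescription.
img = qrcode.make(json.dumps(prescription))
img.save("prescription_P-1042.png")
```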


Author(s):  
Yushi Shen ◽  
Yale Li ◽  
Ling Wu ◽  
Shaofeng Liu ◽  
Qian Wen

Transferring very high quality digital objects over optical networks is critical in many scientific applications, including video streaming/conferencing, remote rendering on tiled display walls, 3D virtual reality, and so on. Current data transfer protocols rely on the User Datagram Protocol (UDP) together with a variety of compression techniques. However, none of these protocols scales well to the parallel model of transferring large-scale graphical data. Existing parallel streaming protocols have limited mechanisms for synchronizing their streams efficiently, and are therefore prone to slowdowns caused by significant packet loss on just one stream. In this chapter, the authors propose a new parallel streaming protocol that streams multiple synchronized flows of media content over optical networks through Cross-Stream packet coding, which not only tolerates random UDP packet losses but also aims to tolerate unevenly distributed packet-loss patterns across the streams, achieving synchronized throughput with reasonable coding overhead. They have simulated the approach, and the results show that it generates steady throughput from fluctuating data streams with different loss patterns and transfers data in parallel at a higher speed than multiple independent UDP streams.
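A minimal sketch of the cross-stream coding idea, assuming simple XOR parity across streams (the chapter's coding scheme is more general): for each "stripe" of packets occupying the same sequence slot across N parallel streams, one parity packet lets the receiver rebuild any single lost packet in that stripe.

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(stripe: list) -> bytes:
    """Parity packet over one stripe: the same sequence slot taken
    from each of the N parallel streams (equal-length packets)."""
    return reduce(xor_bytes, stripe)

def recover(stripe_with_loss: list, parity: bytes) -> list:
    """Rebuild a single lost packet (marked None) by XORing the
    parity with every surviving packet in the stripe."""
    missing = [i for i, p in enumerate(stripe_with_loss) if p is None]
    assert len(missing) == 1, "XOR parity repairs at most one loss per stripe"
    survivors = (p for p in stripe_with_loss if p is not None)
    stripe_with_loss[missing[0]] = reduce(xor_bytes, survivors, parity)
    return stripe_with_loss

# Example: 4 parallel streams, packet lost on stream index 2.
stripe = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = make_parity(stripe)
damaged = [b"AAAA", b"BBBB", None, b"DDDD"]
print(recover(damaged, parity))  # the b"CCCC" packet is rebuilt
```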


Author(s):  
Amir Basirat ◽  
Asad I. Khan ◽  
Heinz W. Schmidt

One of the main challenges for large-scale computer clouds dealing with massive real-time data is coping with the rate at which unprocessed data accumulates. Transforming big data into valuable information requires a fundamental rethink of the way future data management models will need to be developed on the Internet. Unlike existing relational schemes, pattern-matching approaches can analyze data in ways similar to how our brain links information. When implemented in voluminous data clouds, such interactions can help find overarching relations in complex and highly distributed data sets. In this chapter, a different perspective on data recognition is considered. Rather than looking at conventional approaches, such as statistical computations and deterministic learning schemes, this chapter focuses on a distributed processing approach for scalable data recognition and processing.


Processes ◽  
2020 ◽  
Vol 8 (6) ◽  
pp. 649
Author(s):  
Yifeng Liu ◽  
Wei Zhang ◽  
Wenhao Du

Deep learning based on large amounts of high-quality data plays an important role in many industries. However, deep learning is hard to embed directly in a real-time system: the system's data accumulate only through real-time acquisition, while its analysis tasks must also be carried out in real time, so the analysis cannot wait for data to accumulate over a long period. To solve the problems of high-quality data accumulation, the high timeliness required of data analysis, and the difficulty of embedding deep-learning algorithms directly in real-time systems, this paper proposes a new progressive deep-learning framework and conducts experiments on image recognition. The experimental results show that the proposed framework is effective, performs well, and reaches conclusions similar to those of a deep-learning framework trained on large-scale data.
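The framework itself is not spelled out in this abstract; as a generic illustration of progressive learning on streaming data, here is a sketch using scikit-learn's `partial_fit`, which updates a model as each real-time batch arrives instead of waiting for a full data set. The model choice, feature sizes, and synthetic batch source are all assumptions.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

classes = np.arange(10)                  # e.g. 10 image classes
model = SGDClassifier(loss="log_loss")   # supports incremental updates

def batches():
    """Stand-in for a real-time acquisition source: yields small
    (features, labels) batches as they are captured."""
    rng = np.random.default_rng(0)
    for _ in range(100):
        X = rng.normal(size=(32, 64))    # 32 samples, 64 features
        y = rng.integers(0, 10, size=32)
        yield X, y

# Progressive training: the model improves with each arriving batch,
# so it is usable long before all the data has accumulated.
for X, y in batches():
    model.partial_fit(X, y, classes=classes)

print(model.predict(np.zeros((1, 64))))
```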


2012 ◽  
pp. 235-257
Author(s):  
Christopher Oehmen ◽  
Scott Dowson ◽  
Wes Hatley ◽  
Justin Almquist ◽  
Bobbie-Jo Webb-Robertson ◽  
...  
