Real-Time Data Processing
Recently Published Documents


TOTAL DOCUMENTS: 215 (five years: 65)
H-INDEX: 15 (five years: 3)

Mathematics (2021), Vol. 9 (24), pp. 3251
Author(s): Sergei V. Shalagin

For a very wide range of tasks, such as real-time data processing in intelligent transport systems, advanced computing techniques are required, including field-programmable gate arrays (FPGAs). This paper proposes a method for pre-calculating the hardware complexity of computing a group of polynomial functions on FPGA chips, as a function of the number of input variables of those functions. For a group of polynomial functions, these estimates are reduced by computing the common values of elementary polynomials. Implementation is performed using similar software IP cores adapted to the FPGA architecture, which is built from lookup tables and D flip-flops. This ensures that pipelined data processing achieves the highest operating speed of a device implementing a group of polynomial functions defined over a Galois field, independently of the number of variables of those functions. Because a group of polynomial functions is computed from common variables, the input/output blocks of the FPGA are not a significant limiting factor for the hardware complexity estimates. The estimates obtained with the proposed method make it possible to evaluate the amount of reconfigurable FPGA resources required to implement a group of polynomial functions defined over a Galois field, both for existing FPGAs and for promising ones that have not yet been implemented.
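The complexity reduction described above comes from evaluating the elementary polynomials shared by the group only once. The following sketch illustrates that sharing idea in software; the monomial representation, the choice of GF(2), and the example polynomials are illustrative assumptions, not the paper's IP-core implementation.

```python
# Illustrative sketch (not the paper's IP-core design): evaluating a group of
# polynomial functions over a small Galois field GF(P) while reusing the values
# of shared monomials, the idea behind the reduced hardware-complexity estimates.

from itertools import product

P = 2  # field characteristic; GF(2) keeps the arithmetic simple

def monomial_value(exponents, x):
    """Value of x1^e1 * ... * xn^en over GF(P)."""
    v = 1
    for e, xi in zip(exponents, x):
        v = (v * pow(xi, e, P)) % P
    return v

def evaluate_group(polynomials, x):
    """Evaluate several polynomials on shared inputs x with a common monomial cache.

    Each polynomial is a dict {exponent_tuple: coefficient}."""
    cache = {}  # monomials computed once and reused by every polynomial in the group
    results = []
    for poly in polynomials:
        acc = 0
        for exponents, coeff in poly.items():
            if exponents not in cache:
                cache[exponents] = monomial_value(exponents, x)
            acc = (acc + coeff * cache[exponents]) % P
        results.append(acc)
    return results

# Two polynomials in three variables sharing the monomial x1*x2:
f1 = {(1, 1, 0): 1, (0, 0, 1): 1}   # x1*x2 + x3
f2 = {(1, 1, 0): 1, (1, 0, 1): 1}   # x1*x2 + x1*x3
for x in product(range(P), repeat=3):
    print(x, evaluate_group([f1, f2], x))
```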


Sensors (2021), Vol. 21 (23), pp. 8060
Author(s): Carlos González-Sánchez, Guillermo Sánchez-Brizuela, Ana Cisnal, Juan-Carlos Fraile, Javier Pérez-Turiel, ...

In this study, a new low-cost, neck-mounted, sensorized wearable device is presented to help farmers detect the onset of calving in extensive livestock farming by continuously monitoring cow data. The device incorporates three sensors: an inertial measurement unit (IMU), a global navigation satellite system (GNSS) receiver, and a thermometer. The hypothesis of this study was that the onset of calving is detectable through analysis of the number of transitions between lying and standing (lying bouts). A new algorithm was developed to detect calving by analysing the frequency and duration of lying and standing postures. An important novelty is that the proposed algorithm was designed to be executed on the embedded microcontroller housed in the cow's collar and therefore requires minimal computational resources while allowing for real-time data processing. In this preliminary study, six cows were monitored during different stages of gestation (before, during, and after calving), both with the sensorized wearable device and by human observers. The study was carried out on an extensive livestock farm in Salamanca (Spain) from August 2020 to July 2021. The preliminary results indicate that lying/standing states and transitions may be useful for predicting calving. Further research, with data from future calvings, is required to refine the algorithm.
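As a rough illustration of the lying-bout idea, the sketch below classifies accelerometer samples as lying or standing by a pitch-angle threshold and counts standing-to-lying transitions. The threshold, the synthetic trace, and the single-sensor model are illustrative guesses, not the authors' calibrated algorithm.

```python
# Minimal sketch of lying/standing transition ("lying bout") counting from IMU
# accelerometer data. The 60 degree pitch threshold and the synthetic trace are
# assumptions for illustration only.

import math

def posture_from_accel(ax, ay, az, pitch_threshold_deg=60.0):
    """Classify one accelerometer sample (in g) as 'lying' or 'standing'."""
    pitch = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
    return "lying" if abs(pitch) > pitch_threshold_deg else "standing"

def count_lying_bouts(samples):
    """Count standing -> lying transitions in a stream of (ax, ay, az) samples."""
    bouts = 0
    previous = None
    for ax, ay, az in samples:
        state = posture_from_accel(ax, ay, az)
        if previous == "standing" and state == "lying":
            bouts += 1
        previous = state
    return bouts

# Short synthetic trace: standing (gravity on z), then lying (gravity on x), twice.
trace = [(0.0, 0.0, 1.0)] * 5 + [(1.0, 0.0, 0.1)] * 5 + \
        [(0.0, 0.0, 1.0)] * 5 + [(1.0, 0.0, 0.1)] * 5
print(count_lying_bouts(trace))  # -> 2
```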


2021, Vol. 2021, pp. 1-15
Author(s): Xianwei Li, Baoliu Ye

With the development of the Internet of Things, massive numbers of computation-intensive tasks are generated by mobile devices, whose limited computing and storage capacity leads to poor quality of service. Edge computing, an effective computing paradigm, was proposed for efficient, real-time data processing by providing computing resources at the edge of the network. The deployment of 5G promises to speed up data transmission but also further increases the volume of tasks to be offloaded. However, how to transfer data or tasks to edge servers in 5G for processing with high response efficiency remains a challenge. In this paper, a latency-aware computation offloading method for 5G networks is proposed. First, the latency and energy consumption models of edge computation offloading in 5G are defined. Then, a fine-grained computation offloading method is employed to reduce the overall completion time of the tasks. The approach is further extended to solve the multiuser computation offloading problem. To verify the effectiveness of the proposed method, extensive simulation experiments are conducted. The results show that the proposed offloading method can effectively reduce the execution latency of the tasks.
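To make the trade-off concrete, here is a minimal sketch of the kind of latency and energy comparison that underlies an offloading decision. The linear transmission/execution model and all numbers are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a local-vs-offload comparison: offloading latency is modelled
# as uplink transmission plus remote execution, and the device spends energy only
# while transmitting. All parameters below are made-up example values.

def local_cost(cycles, f_local_hz, power_local_w):
    t = cycles / f_local_hz                  # local execution latency (s)
    return t, power_local_w * t              # (latency, energy in J)

def offload_cost(data_bits, rate_bps, cycles, f_edge_hz, power_tx_w):
    t_tx = data_bits / rate_bps              # uplink transmission latency (s)
    t_exec = cycles / f_edge_hz              # execution latency at the edge server
    return t_tx + t_exec, power_tx_w * t_tx  # device energy spent on transmission only

# Example task: 1 MB of input data, 2 * 10^9 CPU cycles.
t_l, e_l = local_cost(2e9, f_local_hz=1e9, power_local_w=2.0)
t_o, e_o = offload_cost(8e6, rate_bps=100e6, cycles=2e9, f_edge_hz=10e9, power_tx_w=1.0)
print(f"local:   {t_l:.2f} s, {e_l:.2f} J")
print(f"offload: {t_o:.2f} s, {e_o:.2f} J")
print("decision:", "offload" if t_o < t_l else "local")
```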


2021, Vol. 54 (5)
Author(s): Marjan Hadian-Jazi, Alireza Sadri, Anton Barty, Oleksandr Yefanov, Marina Galchenkova, ...

A peak-finding algorithm for serial crystallography (SX) data analysis based on the principle of 'robust statistics' has been developed. Methods which are statistically robust are generally more insensitive to any departures from model assumptions and are particularly effective when analysing mixtures of probability distributions. For example, these methods enable the discretization of data into a group comprising inliers (i.e. the background noise) and another group comprising outliers (i.e. Bragg peaks). Our robust statistics algorithm has two key advantages, which are demonstrated through testing using multiple SX data sets. First, it is relatively insensitive to the exact value of the input parameters and hence requires minimal optimization. This is critical for the algorithm to be able to run unsupervised, allowing for automated selection or 'vetoing' of SX diffraction data. Secondly, the processing of individual diffraction patterns can be easily parallelized. This means that it can analyse data from multiple detector modules simultaneously, making it ideally suited to real-time data processing. These characteristics mean that the robust peak finder (RPF) algorithm will be particularly beneficial for the new class of MHz X-ray free-electron laser sources, which generate large amounts of data in a short period of time.
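A minimal sketch of the robust-statistics idea, assuming a median/MAD background estimate and an arbitrary 6-sigma cut; the published RPF algorithm and its parameters may differ.

```python
# Illustrative robust peak finding: the median and MAD are insensitive to the
# bright outlier pixels, so they estimate the background well, and pixels far
# above that estimate are flagged as candidate Bragg peaks.

import numpy as np

def robust_peak_mask(frame, n_sigma=6.0):
    """Return a boolean mask of outlier (candidate Bragg peak) pixels."""
    background = np.median(frame)
    mad = np.median(np.abs(frame - background))
    sigma = 1.4826 * mad            # MAD -> standard deviation for Gaussian noise
    return frame > background + n_sigma * sigma

# Synthetic detector module: Gaussian background plus two bright peaks.
rng = np.random.default_rng(0)
frame = rng.normal(loc=50.0, scale=5.0, size=(128, 128))
frame[30, 40] += 200.0
frame[90, 100] += 150.0
print(np.argwhere(robust_peak_mask(frame)))   # -> the two injected peak positions
```

Because each frame (or each detector module) is processed independently, calls like this parallelize trivially across modules, which is the property the abstract highlights for real-time use.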


Author(s): Sabrine Khriji, Yahia Benbelgacem, Rym Chéour, Dhouha El Houssaini, Olfa Kanoun

The growth of the Internet of Things (IoT) and the number of connected devices is driven by emerging applications and business models. One common aim is to provide systems able to synchronize these devices, handle the large amount of daily generated data, and meet business demands. This paper proposes REDA, a cost-effective cloud-based architecture that uses an event-driven backbone to process data from many applications in real time. It supports the Amazon Web Service (AWS) IoT Core and opens the door to a free, software-based implementation. Measured data from several wireless sensor nodes are transmitted to the cloud-running application through the lightweight publish/subscribe messaging transport protocol MQTT. The real-time stream processing platform Apache Kafka is used as a message broker to receive data from producers and forward them to the corresponding consumers. Micro-service design patterns, acting as event consumers, are implemented with Java Spring and managed with Apache Maven to avoid the problems of monolithic applications. The Apache Kafka cluster, co-located with Zookeeper, is deployed over three availability zones and optimized for high throughput and low latency. To guarantee no message loss and to simulate system performance, different load tests are carried out. The proposed architecture remains reliable under stress and can handle up to 8000 messages per second with low latency on an inexpensively hosted and configured architecture.
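A minimal sketch of the publish/consume path through Kafka described above, using the kafka-python client. The broker address, topic name, and message shape are assumptions; REDA itself bridges MQTT (AWS IoT Core) into Kafka and consumes with Java Spring micro-services rather than Python.

```python
# Hedged sketch: push one sensor reading into a Kafka topic and read it back,
# standing in for the producer/consumer roles in the event-driven backbone.

import json
from kafka import KafkaProducer, KafkaConsumer

BROKERS = "localhost:9092"          # assumed broker address
TOPIC = "sensor-readings"           # assumed topic name

# Producer side: forward a wireless-sensor measurement into the Kafka cluster.
producer = KafkaProducer(
    bootstrap_servers=BROKERS,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"node_id": 7, "temperature_c": 21.4})
producer.flush()

# Consumer side: a (hypothetical) micro-service reading the same topic.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKERS,
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for record in consumer:
    print(record.value)             # process the measurement in real time
```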


2021, Vol. 2021, pp. 1-15
Author(s): Jianguo Sun, Yang Yang, Zechao Liu, Yuqing Qiao

Currently, the Internet of Things (IoT) provides individuals with real-time data processing and efficient data transmission services, relying on extensive edge infrastructures. However, those infrastructures may disclose sensitive consumer information without authorization, which makes data access control a widely researched topic. Ciphertext-policy attribute-based encryption (CP-ABE) is regarded as an effective cryptographic tool for providing users with fine-grained access policies. In prior ABE schemes, the attribute universe is managed by a single trusted central authority (CA), which reduces security and efficiency. In addition, all attributes are considered equally important in the access policy, so the access policy cannot be expressed flexibly. In this paper, we propose two schemes of a new form of encryption named multi-authority criteria-based encryption (CE). In this setting, the schemes express each criterion as a polynomial and assign a weight to it. Unlike ABE schemes, decryption succeeds if and only if the user satisfies the access policy and the weight exceeds the threshold. The proposed schemes are proved secure under the decisional q-bilinear Diffie–Hellman exponent (q-BDHE) assumption in the standard model. Finally, we provide an implementation of our work, and the simulation results indicate that our schemes are highly efficient.
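As a toy, non-cryptographic illustration of the stated decryption condition, the check below allows access only when the combined weight of the criteria the user satisfies exceeds the threshold. The criterion names, weights, and this particular reading of "satisfies the access policy" are assumptions, not the schemes' actual construction.

```python
# Policy-level illustration only: no encryption is performed here. In the CE
# schemes themselves this condition is enforced cryptographically.

def ce_access_allowed(user_criteria, policy_weights, threshold):
    """Allow access iff the summed weights of the satisfied criteria exceed the threshold.

    policy_weights: dict mapping criterion name -> weight (hypothetical values)."""
    total = sum(w for criterion, w in policy_weights.items() if criterion in user_criteria)
    return total > threshold

policy = {"cardiologist": 5, "hospital_A": 3, "senior_staff": 2}
print(ce_access_allowed({"cardiologist", "hospital_A", "senior_staff"}, policy, 8))  # True
print(ce_access_allowed({"cardiologist", "hospital_A"}, policy, 8))                  # False
```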


2021, Vol. 21 (4), pp. 1-20
Author(s): Zhihan Lv, Liang Qiao, Sahil Verma, Kavita

As deep learning, virtual reality, and other technologies mature, real-time data processing applications running on intelligent terminals are emerging constantly; meanwhile, edge computing has developed rapidly and has become a popular research direction in the field of distributed computing. An edge computing network is a computing environment composed of multiple edge computing nodes and data centers. First, the edge computing framework and key technologies are analyzed to improve the performance of real-time data processing applications. In a system scenario that considers collaborative deployment of tasks across multiple edge nodes and data centers, the stream-processing task deployment process is formally described, and an efficient collaborative task deployment algorithm spanning edge nodes and computing centers is proposed, which solves the copy-free task deployment problem. Furthermore, a heterogeneous edge collaborative storage mechanism that tightly couples computation and data is proposed, which resolves the contradiction between data demands and the limited computing and storage capabilities of intelligent terminals, thereby improving the performance of data processing applications. A Feasible Solution (FS) algorithm is designed to place copy-free data processing tasks in the system. Under light load, the V value of the FS algorithm is reduced by 73% compared with the Only Data Center-available (ODC) algorithm and by 41% compared with the Hash algorithm; under heavy load, it is reduced by 66% and 35%, respectively. By considering overall coordination and cooperation, the algorithm uses the bandwidth of edge nodes more effectively to transmit and process data streams, so that more tasks can be deployed on edge computing nodes, saving the time needed to transmit data to the data centers. The end-to-end collaborative real-time data processing task scheduling mechanism proposed here effectively avoids long waiting times and failures to obtain required data, which significantly improves the task success rate and thus ensures the performance of real-time data processing.
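For intuition only, the following hypothetical greedy placement keeps a task next to the edge node holding its data when capacity allows and otherwise falls back to the data center, accumulating a simple cost. It is not the paper's FS algorithm or its V metric; all capacities and penalties are made up.

```python
# Hypothetical sketch of copy-free placement: run each stream-processing task on
# the edge node that already holds its input data if that node has spare CPU,
# otherwise ship the data to the data center at an extra cost.

def place_tasks(tasks, edge_capacity, dc_penalty=10.0):
    """tasks: list of (task_id, data_node, cpu_demand). Returns (placement, total cost)."""
    load = {node: 0.0 for node in edge_capacity}
    placement, cost = {}, 0.0
    for task_id, data_node, cpu in tasks:
        if load[data_node] + cpu <= edge_capacity[data_node]:
            load[data_node] += cpu
            placement[task_id] = data_node      # processed next to its data
            cost += cpu
        else:
            placement[task_id] = "data_center"  # data must be shipped upstream
            cost += cpu + dc_penalty
    return placement, cost

tasks = [("t1", "edge1", 3), ("t2", "edge1", 4), ("t3", "edge2", 2), ("t4", "edge1", 5)]
placement, cost = place_tasks(tasks, edge_capacity={"edge1": 8, "edge2": 6})
print(placement)   # t4 overflows edge1 and is sent to the data center
print(cost)
```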

