FAST TRANSPARENT VIRTUAL MEMORY FOR COMPLEX DATA PROCESSING IN SENSOR NETWORKS

2016 ◽  
Vol 2016 ◽  
pp. 1-14 ◽  
Author(s):  
Glauco Feltrin ◽  
Nemanja Popovic ◽  
Kallirroi Flouri ◽  
Piotr Pietrzak

Wireless sensor networks have been shown to be a cost-effective monitoring tool for many applications on civil structures. Strain cycle monitoring for fatigue life assessment of railway bridges, however, is still a challenge, since it is data intensive and requires reliable operation for several weeks or months. In addition, sensing with electrical resistance strain gauges is expensive in terms of energy consumption. The resulting reduction in battery lifetime of sensor nodes increases maintenance costs and reduces the competitiveness of wireless sensor networks. To overcome this drawback, signal conditioning hardware was designed that significantly reduces the energy consumption. Furthermore, the communication overhead is reduced to a sustainable level by an embedded data processing algorithm that extracts the strain cycles from the raw data. Finally, a simple software triggering mechanism that identifies events enabled the discrimination of useful measurements from idle data, thus increasing the efficiency of data processing. The wireless monitoring system was tested on a railway bridge for two weeks. The monitoring system demonstrated good reliability and provided high-quality data.
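The abstract does not spell out the embedded cycle-extraction algorithm, but strain cycles for fatigue assessment are commonly obtained with rainflow counting applied after an event trigger has discarded idle data. The sketch below is a minimal Python illustration of that idea, assuming raw strain samples in a NumPy array; the threshold value, function names, and the simplified three-point rainflow loop (no residual handling) are illustrative, not the authors' implementation.

```python
import numpy as np

def is_event(window, threshold):
    """Software trigger: keep a measurement window only if its
    peak-to-peak strain exceeds a threshold, discarding idle data."""
    return window.max() - window.min() > threshold

def turning_points(signal):
    """Reduce the raw strain series to its local extrema (reversals)."""
    d = np.diff(signal)
    idx = np.where(d[:-1] * d[1:] < 0)[0] + 1
    return np.concatenate(([signal[0]], signal[idx], [signal[-1]]))

def rainflow_cycles(reversals):
    """Simplified three-point rainflow counting: returns (range, mean)
    pairs of full cycles; half-cycle/residual handling is omitted."""
    stack, cycles = [], []
    for r in reversals:
        stack.append(r)
        while len(stack) >= 3:
            x = abs(stack[-1] - stack[-2])
            y = abs(stack[-2] - stack[-3])
            if x < y:
                break
            cycles.append((y, 0.5 * (stack[-2] + stack[-3])))
            del stack[-3:-1]
    return cycles

window = np.array([0.0, 12.0, 3.0, 15.0, -2.0, 9.0, 1.0])  # microstrain (made-up data)
if is_event(window, threshold=10.0):
    print(rainflow_cycles(turning_points(window)))
```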


Author(s):  
Abou_el_ela Abdou Hussein

Advances in web technologies have led to tremendous growth in the volume of data generated every day. This mountain of large, distributed data sets gives rise to the phenomenon called big data: collections of massive, heterogeneous, unstructured, and complex data. The big data life cycle can be summarized as collecting (capturing), storing, distributing, manipulating, interpreting, analyzing, investigating, and visualizing the data. Traditional techniques such as Relational Database Management Systems (RDBMS) cannot handle big data because of their inherent limitations, so advances in computing architecture are required to handle both the data storage requirements and the heavy processing needed to analyze huge volumes and varieties of data economically. Among the many technologies for manipulating big data, Hadoop is one of the most prominent and well-known solutions: an open-source distributed data processing framework. Apache Hadoop was based on the Google File System and the MapReduce programming paradigm. In this paper we survey big data characteristics, starting from the first three V's, which researchers have extended over time to more than fifty-six V's, and compare the proposals to arrive at the best representation and a precise clarification of all the big data V characteristics. We highlight the challenges facing big data processing and how to overcome them using Hadoop, and we discuss its use in processing big data sets as a solution for various problems in a distributed cloud-based environment. The paper mainly focuses on the different components of Hadoop, such as Hive, Pig, and HBase, and also gives a thorough description of Hadoop's pros and cons and of improvements that address Hadoop's problems through a proposed Cost-efficient Scheduler Algorithm for heterogeneous Hadoop systems.
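As a quick illustration of the MapReduce paradigm the abstract refers to, the sketch below runs the canonical word-count example in plain Python: an independent map phase, a shuffle that groups values by key, and a reduce phase. It is an in-memory illustration of the programming model only, not Hadoop API code; the record contents and function names are made up.

```python
from collections import defaultdict
from itertools import chain

def map_phase(record):
    # Map: turn each input record into (key, value) pairs independently,
    # so records can be processed in parallel on many nodes.
    return [(word, 1) for word in record.split()]

def shuffle(pairs):
    # Shuffle: group values by key (Hadoop performs this between map and reduce).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Reduce: combine all values for one key into the final result.
    return key, sum(values)

records = ["big data needs distributed processing",
           "hadoop brings processing to the data"]
pairs = chain.from_iterable(map_phase(r) for r in records)
print(dict(reduce_phase(k, v) for k, v in shuffle(pairs).items()))
```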


Author(s):  
Dan Pescaru ◽  
Daniel-Ioan Curiac

This chapter presents the main challenges in developing complex systems built around the core concept of Video-Based Wireless Sensor Networks. It summarizes some innovative solutions proposed in the scientific literature in this field. Besides discussing various issues related to such systems, the authors focus on two crucial aspects: video data processing and data exchange. Special attention is paid to localization algorithms for randomly deployed nodes that have no dedicated localization hardware installed. Solutions for data exchange are presented by highlighting data compression and communication efficiency in terms of energy saving. Finally, some open research topics related to Video-Based Wireless Sensor Networks are identified and explained.
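The chapter does not commit to a single localization algorithm, but a common choice for randomly deployed nodes without dedicated localization hardware is RSSI-based weighted centroid localization. The sketch below is a minimal Python illustration under assumed radio parameters (a reference RSSI of -40 dBm at 1 m and a path-loss exponent of 2.5); the anchor positions and RSSI values are made up.

```python
# Hypothetical anchors: ((x, y) position of a beacon with known coordinates,
# RSSI in dBm at which the blind node heard it).
anchors = [((0.0, 0.0), -55.0), ((10.0, 0.0), -70.0), ((0.0, 10.0), -62.0)]

def rssi_to_weight(rssi_dbm, ref_rssi=-40.0, path_loss_exp=2.5):
    """Log-distance path-loss model: estimate distance from RSSI, then use
    the inverse distance as a proximity weight (assumed radio parameters)."""
    distance = 10 ** ((ref_rssi - rssi_dbm) / (10.0 * path_loss_exp))
    return 1.0 / max(distance, 1e-6)

def weighted_centroid(anchors):
    """Estimate the blind node's position as the RSSI-weighted centroid
    of the anchors that heard it."""
    weights = [rssi_to_weight(rssi) for _, rssi in anchors]
    total = sum(weights)
    x = sum(w * pos[0] for (pos, _), w in zip(anchors, weights)) / total
    y = sum(w * pos[1] for (pos, _), w in zip(anchors, weights)) / total
    return x, y

print(weighted_centroid(anchors))  # rough (x, y) estimate in the anchors' units
```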


Diagnostics ◽  
2020 ◽  
Vol 10 (12) ◽  
pp. 1052
Author(s):  
Petr G. Lokhov ◽  
Oxana P. Trifonova ◽  
Dmitry L. Maslov ◽  
Elena E. Balashova

In metabolomics, mass spectrometry is used to detect a large number of low-molecular-weight substances in a single analysis. Such a capacity could have direct application in disease diagnostics. However, this is challenging because of the complexity of the analysis, and finding a way to simplify it while maintaining the diagnostic capability is an urgent task. It has been proposed to use the metabolomic signature without complex data processing (mass peak detection, alignment, normalization, and identification of substances, as well as any complex statistical analysis) to make the analysis simpler and more rapid. Methods: A label-free approach was implemented in the metabolomic signature, which makes the measurement of actual or conditional concentrations unnecessary, uses only mass peak relations, and minimizes mass spectra processing. The approach was tested on the diagnosis of impaired glucose tolerance (IGT). Results: The label-free metabolic signature demonstrated a diagnostic accuracy for IGT of 88% (specificity 85%, sensitivity 90%, and area under the receiver operating characteristic curve (AUC) of 0.91), which is considered good quality for diagnostics. Conclusions: It is possible to compile label-free signatures for diseases that allow the disease to be diagnosed in situ, i.e., right at the mass spectrometer, without complex data processing. This achievement makes all mass spectrometers potentially versatile diagnostic devices and accelerates the introduction of metabolomics into medicine.
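The exact signature construction is described in the cited paper; the sketch below only illustrates, in Python, the general idea of a label-free signature built from mass peak relations alone (here, pairwise "which peak is larger" comparisons) and a nearest-signature decision. It assumes each spectrum is already an intensity vector over a common ordered peak list; the function names and distance rule are illustrative, not the authors' method.

```python
import numpy as np
from itertools import combinations

def peak_relations(intensities):
    """Encode a spectrum by pairwise peak relations only (which of two peaks
    is larger), avoiding normalization and substance identification."""
    return np.array([1.0 if intensities[i] > intensities[j] else 0.0
                     for i, j in combinations(range(len(intensities)), 2)])

def build_signature(spectra):
    """Group signature: the average relation vector over reference samples."""
    return np.mean([peak_relations(s) for s in spectra], axis=0)

def classify(spectrum, signature_case, signature_control):
    """Assign a new sample to whichever group signature its relation
    vector lies closer to (Euclidean distance as a stand-in rule)."""
    r = peak_relations(spectrum)
    d_case = np.linalg.norm(r - signature_case)
    d_control = np.linalg.norm(r - signature_control)
    return "IGT" if d_case < d_control else "control"
```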

