Detection and Localization of Failures in Hybrid Fiber–Coaxial Network Using Big Data Platform

Electronics ◽  
2021 ◽  
Vol 10 (23) ◽  
pp. 2906
Author(s):  
Milan Simakovic ◽  
Zoran Cica

Modern HFC (Hybrid Fiber–Coaxial) networks comprise millions of users. It is of great importance for HFC network operators to provide high network access availability to their users, a requirement that is becoming even more important given the increasing trend of remote working. Therefore, network failures need to be detected and localized as soon as possible. This is not an easy task given the large number of devices in a typical HFC network. However, the large number of devices also enables HFC network operators to collect enormous amounts of data that can be used for various purposes. Thus, there is also a trend of introducing big data technologies into HFC networks to cope efficiently with these huge amounts of data. In this paper, we propose a novel mechanism for efficient failure detection and localization in HFC networks using a big data platform. The proposed mechanism utilizes the already present big data platform and the collected data to add one more feature to the platform: efficient failure detection and localization. The mechanism has been successfully deployed in a real HFC network that serves more than one million users.
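The abstract does not reproduce the paper's algorithm, but one common way to localize a failure in a tree-shaped access network is to find the deepest shared upstream node of all devices reporting problems. The sketch below is illustrative only, with a hypothetical headend → fiber node → amplifier → modem hierarchy; it is not the mechanism proposed in the paper.

```python
# Illustrative sketch (not the paper's mechanism): localize a failure in a
# tree-shaped network as the deepest node whose subtree covers every
# impaired device (a lowest-common-ancestor search). Node names are made up.

def path_to_root(node, parent):
    """Return the list of nodes from `node` up to the root."""
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def localize_failure(failing, parent):
    """Deepest node lying on the upstream path of every failing device."""
    paths = [path_to_root(n, parent) for n in failing]
    path_sets = [set(p) for p in paths]
    # Walk the shortest path upward; the first node present on every
    # other path is the probable failure point.
    for node in min(paths, key=len):
        if all(node in s for s in path_sets):
            return node
    return None

# Hypothetical HFC hierarchy: headend -> fiber node -> amplifier -> modem.
parent = {
    "modem1": "amp1", "modem2": "amp1", "modem3": "amp2",
    "amp1": "node1", "amp2": "node1", "node1": "headend",
}

print(localize_failure({"modem1", "modem2"}, parent))  # amp1
print(localize_failure({"modem1", "modem3"}, parent))  # node1
```

If only modems behind a single amplifier are impaired, that amplifier is flagged; if the impairment spans amplifiers, the search moves up to the fiber node.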

F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 409
Author(s):  
Balázs Bohár ◽  
David Fazekas ◽  
Matthew Madgwick ◽  
Luca Csabai ◽  
Marton Olbei ◽  
...  

In the era of Big Data, data collection underpins biological research more than ever before. In many cases it can be as time-consuming as the analysis itself, requiring the download of multiple public databases with different data structures and, in general, days of work before any biological question can be answered. To solve this problem, we introduce an open-source, cloud-based big data platform called Sherlock (https://earlham-sherlock.github.io/). Sherlock fills this gap by letting biologists store, convert, query, share and generate biological data, ultimately streamlining bioinformatics data management. The platform provides a simple interface to big data technologies such as Docker and PrestoDB, and is designed to analyse, process, query and extract information from extremely complex and large data sets. Furthermore, Sherlock can handle differently structured data (interaction, localization, or genomic sequence) from several sources and convert them to a common optimized storage format, for example the Optimized Row Columnar (ORC) format. This format allows Sherlock to quickly and easily execute distributed analytical queries on extremely large data files, as well as to share datasets between teams. The Sherlock platform is freely available on GitHub and contains specific loader scripts for structured data sources from genomics, interaction and expression databases. With these loader scripts, users can easily and quickly create and work with the relevant file formats, such as JavaScript Object Notation (JSON) or ORC. For computational biology and large-scale bioinformatics projects, Sherlock provides an open-source platform empowering data management, data analytics, data integration and collaboration through modern big data technologies.
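The core idea behind such loader scripts is mapping differently structured source records onto one common schema before writing a columnar format. The sketch below is not Sherlock's actual loader code; the source names, field names, and schema are hypothetical, and JSON Lines stands in for the ORC step (which in practice would use a big-data engine such as Spark).

```python
# Illustrative sketch only (not Sherlock's loader scripts): normalizing
# records from differently structured sources into one common schema
# before columnar storage. All names below are hypothetical.
import json

def normalize(record, source):
    """Map a source-specific record onto a common (source, gene_id, value) schema."""
    if source == "expression_db":      # hypothetical expression database
        return {"source": source, "gene_id": record["gene"],
                "value": record["tpm"]}
    if source == "interaction_db":     # hypothetical interaction database
        return {"source": source, "gene_id": record["id_a"],
                "value": record["score"]}
    raise ValueError(f"unknown source: {source}")

rows = [
    normalize({"gene": "TP53", "tpm": 12.4}, "expression_db"),
    normalize({"id_a": "BRCA1", "score": 0.9}, "interaction_db"),
]
# JSON Lines stands in for the ORC write, which an engine like Spark
# would handle (e.g. via a DataFrame writer) in a real pipeline.
print("\n".join(json.dumps(r) for r in rows))
```

Once every source is flattened to the same schema, a single distributed SQL query (e.g. via PrestoDB) can span all of them.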


Author(s):  
C. M. Albrecht ◽  
N. Bobroff ◽  
B. Elmegreen ◽  
M. Freitag ◽  
H. F. Hamann ◽  
...  

Abstract. In this paper we benchmark a previously introduced big data platform that enables the analysis of big data from remote sensing and other geospatial-temporal sources. The platform, called IBM PAIRS Geoscope, has been developed by leveraging open-source big data technologies (Hadoop/HBase) that are in principle scalable in storage and compute to hundreds of petabytes. Currently, PAIRS hosts multiple petabytes of curated, geospatial-temporally indexed data. It organizes all data as key-value combinations, performing analytics close to the data to minimize data movement.
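Organizing geospatial-temporal data as key-value pairs typically means encoding a spatial cell and a timestamp into the key itself, so that scans over one area and time window hit contiguous key ranges. The sketch below illustrates that general idea only; it is not PAIRS's actual key layout, and the grid resolution and key format are invented for the example.

```python
# Illustrative sketch (not IBM PAIRS's actual key scheme): a composite
# row key combining a coarse spatial grid cell with a timestamp, so a
# key-value store keeps nearby observations adjacent on disk.
from datetime import datetime, timezone

def grid_cell(lat, lon, cell_deg=0.5):
    """Snap a coordinate to a coarse grid cell (resolution is hypothetical)."""
    return (int((lat + 90) / cell_deg), int((lon + 180) / cell_deg))

def row_key(lat, lon, ts):
    """Row key with the cell first and time second, for spatial locality."""
    r, c = grid_cell(lat, lon)
    return f"{r:04d}-{c:04d}-{int(ts.timestamp())}"

ts = datetime(2021, 6, 1, tzinfo=timezone.utc)
print(row_key(44.8, 20.5, ts))  # 0269-0401-1622505600
```

With keys laid out this way, "all observations for this cell in this period" becomes a single contiguous range scan, which is what lets analytics run close to the data.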


Author(s):  
Ying Wang ◽  
Yiding Liu ◽  
Minna Xia

Big data is characterized by multiple sources and heterogeneity. Based on a big data platform built on Hadoop and Spark, a hybrid forest fire analysis system is developed in this study. The platform combines big data analysis and processing technology and draws on research results from different technical fields, such as forest fire monitoring. In this system, Hadoop's HDFS is used to store all kinds of data, the Spark module provides various big data analysis methods, and visualization tools such as ECharts, ArcGIS and Unity3D are used to visualize the analysis results. Finally, an experiment for forest fire point detection is designed to corroborate the feasibility and effectiveness of the system and to provide guidance for follow-up research and for establishing a big data platform for forest fire monitoring and visualized early warning. However, this experiment has two shortcomings: more data types should be selected, and compatibility would be better if the original data could be converted to XML format. We expect to address these problems in follow-up research.
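The abstract does not describe the detection rule itself; at its simplest, fire-point detection over sensor readings can be a threshold rule, which in a real deployment would run distributed over the stored data (e.g. as a Spark job). The sketch below is illustrative only, not the paper's model, and the field names and thresholds are hypothetical.

```python
# Illustrative sketch (not the paper's detection model): flagging
# candidate fire points with a simple temperature/humidity rule.
# Field names and thresholds are hypothetical.

def is_fire_candidate(reading, temp_c=45.0, humidity_pct=20.0):
    """Flag a reading as a potential fire point."""
    return reading["temp"] >= temp_c and reading["humidity"] <= humidity_pct

readings = [
    {"id": "s1", "temp": 52.0, "humidity": 12.0},
    {"id": "s2", "temp": 23.5, "humidity": 60.0},
]
candidates = [r["id"] for r in readings if is_fire_candidate(r)]
print(candidates)  # ['s1']
```

In the described architecture, the same per-record predicate would be applied as a filter over data read from HDFS, with the resulting points handed to a visualization layer such as ECharts or ArcGIS.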


Author(s):  
Karima Aslaoui Mokhtari ◽  
Salima Benbernou ◽  
Mourad Ouziri ◽  
Hakim Lahmar ◽  
Muhammad Younas
