Challenges and Recommendations in Big Data Indexing Strategies

2021 ◽  
Vol 17 (2) ◽  
pp. 22-39
Author(s):  
Mohamed Attia Mohamed ◽  
Manal A. Abdel-Fattah ◽  
Ayman E. Khedr

Index structures are one of the main strategies for efficient data access. As data volumes expand, traditional indexing strategies for big data face several challenges that lead to weak performance: they cannot handle the rapid growth of data in terms of retrieval accuracy and processing time. It is therefore necessary to replace the traditional index with a more efficient structure called the learned index, which uses machine learning models to tackle these issues and achieve better processing times and more accurate results. In this research, the authors discuss traditional and learned indexing strategies for big data, demonstrate their main features, compare their performance, and present big data indexing challenges and solutions. Consequently, the research suggests replacing traditional indexes with dynamic learned index models, which lead to shorter processing times and more accurate results, taking into consideration the specifications of the hardware used.
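The core idea of a learned index can be sketched in a few lines: a model approximates the position of a key in sorted storage, and a bounded local search corrects the model's worst-case error. The following pure-Python sketch uses a simple least-squares linear model; it is an illustration of the general technique, not the authors' implementation.

```python
# Minimal learned-index sketch: a linear model predicts a key's position in a
# sorted array, and a bounded binary search within the model's recorded
# worst-case error window finalises the lookup.
import bisect

class LearnedIndex:
    def __init__(self, keys):
        self.keys = sorted(keys)
        n = len(self.keys)
        # Least-squares fit of position ~ a * key + b.
        mean_k = sum(self.keys) / n
        mean_p = (n - 1) / 2
        var = sum((k - mean_k) ** 2 for k in self.keys)
        cov = sum((k - mean_k) * (i - mean_p) for i, k in enumerate(self.keys))
        self.a = cov / var if var else 0.0
        self.b = mean_p - self.a * mean_k
        # Record the worst-case prediction error to bound the search window.
        self.err = max(abs(self._predict(k) - i) for i, k in enumerate(self.keys))

    def _predict(self, key):
        return int(self.a * key + self.b)

    def lookup(self, key):
        p = self._predict(key)
        lo = max(0, p - self.err)
        hi = min(len(self.keys), p + self.err + 1)
        i = bisect.bisect_left(self.keys, key, lo, hi)
        return i if i < len(self.keys) and self.keys[i] == key else -1

idx = LearnedIndex(range(0, 1000, 7))   # keys 0, 7, 14, ...
print(idx.lookup(77))    # → 11 (position of key 77)
print(idx.lookup(78))    # → -1 (key not present)
```

Because the keys here are uniformly spaced, the linear model is near-exact and the correction window is tiny; on skewed key distributions a learned index would use a more expressive or hierarchical model, which is where the "dynamic index models" discussed above come in.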

A sentiment analysis using SNS data can reveal the thoughts of a wide range of people. An analysis using SNS can thus predict social problems and more accurately identify the complex causes of a problem. In addition, big data technology can process SNS information generated in real time, allowing a wide range of opinions to be understood without delay, so it can supplement traditional opinion surveys. The incumbent government mainly uses SNS to promote its policies, but measures are needed to actively reflect SNS in the process of carrying out policy. Therefore, this paper develops a sentiment classifier that can identify public feelings on SNS about climate change. To that end, based on a dictionary formulated on the theme of climate change, we collected climate change SNS data for training and tagged seven sentiments. Using the training data, sentiment classifiers were built with machine learning models. The analysis showed that the Bi-LSTM model outperformed the shallow models, achieving the highest accuracy (85.10%) across the seven sentiment classes and exceeding traditional machine learning (Naïve Bayes and SVM) by approximately 34.53 and 7.14 percentage points, respectively. These findings substantiate the applicability of the proposed Bi-LSTM-based sentiment classifier to the analysis of sentiments relevant to diverse climate change issues.
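To make the comparison concrete, here is a toy version of the traditional baseline the paper measures against: a multinomial Naive Bayes classifier with Laplace smoothing, in pure Python. The documents and sentiment labels are invented for illustration; the paper's actual classifier was trained on tagged climate change SNS data.

```python
# Toy multinomial Naive Bayes sentiment baseline with Laplace smoothing.
# Training documents and labels below are illustrative only.
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (token_list, label). Returns (log_priors, log_likelihoods, vocab)."""
    label_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        word_counts[label].update(tokens)
        vocab.update(tokens)
    log_prior = {c: math.log(n / len(docs)) for c, n in label_counts.items()}
    log_like = {}
    for c in label_counts:
        total = sum(word_counts[c].values())
        # Laplace (add-one) smoothing so unseen words never zero out a class.
        log_like[c] = {w: math.log((word_counts[c][w] + 1) / (total + len(vocab)))
                       for w in vocab}
    return log_prior, log_like, vocab

def predict_nb(model, tokens):
    log_prior, log_like, vocab = model
    scores = {c: log_prior[c] + sum(log_like[c][w] for w in tokens if w in vocab)
              for c in log_prior}
    return max(scores, key=scores.get)

docs = [
    (["rising", "seas", "are", "terrifying"], "fear"),
    (["floods", "destroyed", "our", "town"], "sadness"),
    (["great", "progress", "on", "clean", "energy"], "joy"),
    (["terrifying", "heat", "waves"], "fear"),
]
model = train_nb(docs)
print(predict_nb(model, ["terrifying", "seas"]))   # → fear
```

A bag-of-words model like this ignores word order, which is precisely the limitation that recurrent architectures such as Bi-LSTM address and why the paper reports a sizeable accuracy gap over the shallow baselines.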


2020 ◽  
Vol 7 (1) ◽  
Author(s):  
Thérence Nibareke ◽  
Jalal Laassiri

Abstract Introduction Nowadays, large data volumes are generated daily at a high rate. Data from health systems, social networks, finance, government, marketing, and bank transactions, as well as from sensors and smart devices, are increasing, so the tools and models used to process them have to be optimized. In this paper we applied and compared machine learning algorithms (Linear Regression, Naïve Bayes, Decision Tree) to predict diabetes. Furthermore, we performed analytics on flight delays. The main contribution of this paper is to give an overview of big data tools and machine learning models and to highlight metrics that allow us to choose the more accurate model. We predicted diabetes using three machine learning models and compared their performance. Furthermore, we analyzed flight delays and produced a dashboard that can help managers of flight companies gain a 360° view of their flights and take strategic decisions. Case description We applied three machine learning algorithms for predicting diabetes and compared their performance to see which model gives the best results. We performed analytics on flight datasets to support decision making and predict flight delays. Discussion and evaluation The experiment shows that Linear Regression, Naïve Bayes, and Decision Tree give the same accuracy (0.766), but Decision Tree outperforms the other two models with the greatest score (1) and the smallest error (0). For the flight delay analytics, the model could show, for example, the airport that recorded the most flight delays. Conclusions Several tools and machine learning models for big data analytics have been discussed in this paper. We concluded that, even for the same dataset, the model used for prediction must be chosen carefully. In future work, we will test different models in other fields (climate, banking, insurance).
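The model-selection step described above reduces to computing a few metrics per model and comparing them. This pure-Python sketch shows the idea with accuracy and mean absolute error on hypothetical hold-out predictions; the labels and predictions are invented, not taken from the paper's diabetes dataset.

```python
# Sketch of metric-based model comparison: compute accuracy and mean absolute
# error per model on held-out labels, then pick the best. Data is hypothetical.

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def mean_abs_error(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [1, 0, 1, 1, 0, 1, 0, 1]
preds = {
    "naive_bayes":   [1, 0, 0, 1, 0, 1, 1, 1],   # two mistakes
    "decision_tree": [1, 0, 1, 1, 0, 1, 0, 1],   # perfect on this toy split
}
for name, y_pred in preds.items():
    print(name, accuracy(y_true, y_pred), mean_abs_error(y_true, y_pred))
```

On this toy split the decision tree reaches accuracy 1 with error 0 while the other model does not, mirroring the kind of gap the paper reports; in practice these metrics would be computed with cross-validation rather than a single hold-out list.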


2021 ◽  
Author(s):  
Andrew McDonald ◽  

Decades of subsurface exploration and characterisation have led to the collation and storage of large volumes of well-related data, and the amount gathered daily continues to grow rapidly as technology and recording methods improve. With the increasing adoption of machine learning techniques in the subsurface domain, it is essential that the quality of the input data is carefully considered when working with these tools. If the input data is of poor quality, the impact on the precision and accuracy of the prediction can be significant, which in turn can affect key decisions about the future of a well or a field. This study focuses on well log data, which can be highly multi-dimensional, diverse, and stored in a variety of file formats. Well log data exhibits the key characteristics of Big Data: Volume, Variety, Velocity, Veracity and Value. Well data can include numeric values, text values, waveform data, image arrays, maps, and volumes, all of which can be indexed by time or depth in a regular or irregular way. A significant portion of time can be spent gathering and quality-checking data before carrying out petrophysical interpretations and applying machine learning models. Well log data can be affected by numerous issues that degrade data quality, including missing data (ranging from single data points to entire curves), noisy data from tool-related issues, borehole washout, processing issues, incorrect environmental corrections, and mislabelled data. Having vast quantities of data does not mean it can all be passed into a machine learning algorithm with the expectation that the resultant prediction will be fit for purpose. It is essential that the most important and relevant data is passed into the model through appropriate feature selection techniques. Not only does this improve the quality of the prediction, it also reduces computational time and can provide a better understanding of how the models reach their conclusions.
This paper reviews data quality issues typically faced by petrophysicists when working with well log data and deploying machine learning models. First, an overview of machine learning and Big Data is given in relation to petrophysical applications. Second, data quality issues commonly faced with well log data are discussed. Third, methods for dealing with data issues prior to modelling are suggested. Finally, multiple case studies are presented covering the impact of data quality on predictive capability.
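A first-pass quality check of the kind described above can be automated. The sketch below (illustrative, not from the paper) flags log curves whose fraction of null readings exceeds a threshold before they reach a model; -999.25 is the conventional null placeholder in LAS well log files, while the curve names and threshold are arbitrary.

```python
# Minimal well-log curve quality check: measure the fraction of null readings
# per curve and flag curves too incomplete to feed into a model.

LAS_NULL = -999.25  # conventional null value in LAS well log files

def missing_fraction(curve):
    return sum(v == LAS_NULL for v in curve) / len(curve)

def flag_curves(curves, max_missing=0.2):
    """curves: dict of curve name -> list of readings. Returns names failing the check."""
    return [name for name, vals in curves.items()
            if missing_fraction(vals) > max_missing]

curves = {
    "GR":   [45.2, 50.1, -999.25, 48.7, 47.0],          # 20% missing: passes
    "RHOB": [2.31, -999.25, -999.25, -999.25, 2.40],    # 60% missing: flagged
}
print(flag_curves(curves))   # → ['RHOB']
```

Real pipelines would extend this with depth-range checks, spike detection for tool noise, and washout flags driven by the caliper curve, but the pattern is the same: quantify each issue per curve, then gate what enters the model.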


2021 ◽  
Vol 185 ◽  
pp. 177-184
Author(s):  
Lirim Ashiku ◽  
Md. Al-Amin ◽  
Sanjay Madria ◽  
Cihan Dagli

2020 ◽  
Vol 12 (20) ◽  
pp. 3423
Author(s):  
Alireza Arabameri ◽  
Sunil Saha ◽  
Kaustuv Mukherjee ◽  
Thomas Blaschke ◽  
Wei Chen ◽  
...  

The uncertainty of flash floods makes them highly difficult to predict with conventional models. Physical hydrologic models for flash flood prediction over any large area are very difficult to compute, as they require a great deal of data and time. Remote sensing data-based models (from statistical to machine learning) have therefore become highly popular due to open data access and shorter prediction times, and there is a continuous effort to improve their prediction accuracy by introducing new methods. This study focuses on flash flood modeling through novel hybrid machine learning models that can improve prediction accuracy. Hybrid machine learning ensemble approaches that combine three meta-classifiers (Real AdaBoost, Random Subspace, and MultiBoosting) with J48 (a tree-based algorithm that evaluates the behavior of the attribute vector for any defined number of instances) were used in the Gorganroud River Basin of Iran to assess flood susceptibility (FS). A total of 426 flood positions as the dependent variable and 14 flood conditioning factors (FCFs) as independent variables were used to model FS. Several threshold-dependent and threshold-independent statistical tests were applied to verify the performance and predictive capability of these machine learning models, such as the receiver operating characteristic (ROC) curve of the success rate curve (SRC) and prediction rate curve (PRC), efficiency (E), root-mean-square error (RMSE), and true skill statistics (TSS). The evaluation of the FCFs was done using AdaBoost, frequency ratio (FR), and Boosted Regression Tree (BRT) models. In the flooding of the study area, altitude, land use/land cover (LU/LC), distance to stream, normalized difference vegetation index (NDVI), and rainfall played important roles.
The Random Subspace J48 (RSJ48) ensemble, with an area under the curve (AUC) of 0.931 (SRC) and 0.951 (PRC), an E of 0.89, a sensitivity of 0.87, and a TSS of 0.78, proved the most effective ensemble for predicting FS. The FR technique also showed good performance and reliability for all models. Map removal sensitivity analysis (MRSA) revealed that the FS maps are most sensitive to elevation. Based on the findings of the validation methods, the FS maps prepared using the machine learning ensemble techniques are highly robust and can be used to inform flood management initiatives in flood-prone areas.
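The threshold-dependent validation metrics named above all derive from a binary confusion matrix. The sketch below shows the standard definitions; the confusion counts are invented for illustration (chosen to reproduce the magnitudes reported for RSJ48) and are not the study's actual counts.

```python
# Standard binary validation metrics from a confusion matrix:
# sensitivity, specificity, true skill statistic (TSS), and efficiency (E).

def validation_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    tss = sensitivity + specificity - 1   # true skill statistic
    efficiency = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, tss, efficiency

# Hypothetical counts for 200 validation points (100 flood, 100 non-flood).
sens, spec, tss, eff = validation_metrics(tp=87, fp=9, fn=13, tn=91)
print(round(sens, 2), round(tss, 2), round(eff, 2))   # → 0.87 0.78 0.89
```

Unlike accuracy, TSS is insensitive to the ratio of flood to non-flood sample points, which is why it is a common choice for validating susceptibility maps built from presence/absence data.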


2022 ◽  
Author(s):  
Song Guo ◽  
Zhihao Qu

Discover this multi-disciplinary and insightful work, which integrates machine learning, edge computing, and big data. It presents the basics of training machine learning models, the key challenges and issues, and comprehensive techniques including edge learning algorithms and system design issues. It describes architectures, frameworks, and key technologies for learning performance, security, and privacy, as well as incentive issues in training and inference at the network edge. Intended to stimulate fruitful discussion, inspire further research ideas, and inform readers from both academia and industry, it is essential reading for experienced researchers and developers, and for those just entering the field.
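A canonical edge learning algorithm of the kind such a book covers is federated averaging: each edge client trains on its local data and a server averages the resulting model weights. The sketch below is an assumed minimal illustration on a one-parameter linear model, not code from the book.

```python
# Minimal federated averaging sketch: clients run local gradient descent on a
# 1-D linear model y = w * x, and the server averages the updated weights.

def local_update(weights, data, lr=0.05):
    """One local pass of gradient descent over one client's (x, y) pairs."""
    w = weights
    for x, y in data:
        grad = 2 * (w * x - y) * x   # d/dw of the squared error (w*x - y)^2
        w -= lr * grad
    return w

def fed_avg(global_w, client_datasets, rounds=20):
    for _ in range(rounds):
        local_ws = [local_update(global_w, d) for d in client_datasets]
        global_w = sum(local_ws) / len(local_ws)   # server-side averaging
    return global_w

# Each edge client holds data consistent with the same underlying model y = 3x.
clients = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0)], [(0.5, 1.5), (4.0, 12.0)]]
w = fed_avg(0.0, clients)
print(round(w, 2))   # → 3.0
```

The raw data never leaves the clients; only weights travel, which is the privacy motivation for training at the network edge. With non-identically distributed client data the averaged model drifts from the centralized optimum, which is one of the key challenges the book discusses.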

