Geospatial Serverless Computing: Architectures, Tools and Future Directions

2020
Vol 9 (5)
pp. 311
Author(s):  
Sujit Bebortta
Saneev Kumar Das
Meenakshi Kandpal
Rabindra Kumar Barik
Harishchandra Dubey

Several real-world applications involve the aggregation of physical features corresponding to different geographic and topographic phenomena. This information plays a crucial role in analyzing and predicting many events. The application areas, which often require real-time analysis, include traffic flow, forest cover, disease monitoring, and so on. Most existing systems, however, exhibit limitations at various levels of processing and implementation, the most commonly observed being a lack of reliability, poor scalability, and excessive computational cost. In this paper, we address several well-known scalable serverless frameworks, namely Amazon Web Services (AWS) Lambda, Google Cloud Functions, and Microsoft Azure Functions, for the management of geospatial big data. We discuss some of the existing approaches popularly used in analyzing geospatial big data and indicate their limitations. We report the applicability of our proposed framework in the context of a Cloud Geographic Information System (GIS) platform, and give an account of some state-of-the-art technologies and tools relevant to our problem domain. We also visualize the performance of the proposed framework in terms of reliability, scalability, speed, and security. Furthermore, we present map overlay analysis, point-cluster analysis, the generated heatmap, and clustering analysis, along with relevant statistical plots. We consider two application case studies. The first was explored using the Mineral Resources Data System (MRDS) dataset, which records the worldwide, country-wise density of mineral resources. The second was performed using the Fairfax Forecast Households dataset, which provides parcel-level household predictions for 30 consecutive years. The proposed model integrates a serverless framework to reduce timing constraints, and it also improves the performance of geospatial data processing for high-dimensional hyperspectral data.
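As a concrete illustration of the serverless pattern this paper builds on, the sketch below shows a minimal AWS Lambda handler that bins GeoJSON point features into a regular grid to produce heatmap-style counts. This is a hypothetical example, not the authors' implementation; the event shape (a GeoJSON FeatureCollection in an API Gateway proxy body) and the grid resolution are assumptions.

```python
import json
from collections import Counter

CELL_DEG = 0.5  # assumed grid resolution in degrees

def lambda_handler(event, context):
    """Bin GeoJSON point features into a lat/lon grid (heatmap counts)."""
    body = json.loads(event["body"])  # assumes an API Gateway proxy event
    counts = Counter()
    for feature in body.get("features", []):
        geom = feature.get("geometry", {})
        if geom.get("type") != "Point":
            continue
        lon, lat = geom["coordinates"][:2]
        # Snap the point to the lower-left corner of its grid cell.
        cell = (round(lon // CELL_DEG * CELL_DEG, 4),
                round(lat // CELL_DEG * CELL_DEG, 4))
        counts[cell] += 1
    return {
        "statusCode": 200,
        "body": json.dumps({f"{x},{y}": n for (x, y), n in counts.items()}),
    }
```

Because the handler is stateless, the platform can fan out many such invocations in parallel, which is the property the paper leans on for scalability.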

2019
Vol 6 (1)
Author(s):  
Tawfiq Hasanin
Taghi M. Khoshgoftaar
Joffrey L. Leevy
Richard A. Bauder

Severe class imbalance between majority and minority classes in Big Data can bias the predictive performance of Machine Learning algorithms toward the majority (negative) class. Where the minority (positive) class holds greater value than the majority class and false negatives incur a greater penalty than false positives, this bias may lead to adverse consequences. Our paper incorporates two case studies, each utilizing three learners, six sampling approaches, two performance metrics, and five sampled distribution ratios, to investigate the effect of severe class imbalance on Big Data analytics. The learners (Gradient-Boosted Trees, Logistic Regression, Random Forest) were implemented within the Apache Spark framework. The first case study is based on a Medicare fraud detection dataset. The second case study, unlike the first, uses training data from one source (SlowlorisBig dataset) and test data from a separate source (POST dataset). Results from the Medicare case study are not conclusive regarding the best sampling approach under the Area Under the Receiver Operating Characteristic Curve (AUC) and Geometric Mean performance metrics, although Random Undersampling performs adequately. For the SlowlorisBig case study, Random Undersampling convincingly outperforms the other five sampling approaches (Random Oversampling, Synthetic Minority Over-sampling TEchnique (SMOTE), SMOTE-borderline1, SMOTE-borderline2, ADAptive SYNthetic) on both metrics. Based on its classification performance in both case studies, Random Undersampling is the best choice, as it yields models trained on significantly fewer samples, thus reducing computational burden and training time.
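The best-performing approach here, Random Undersampling, is straightforward to express in Spark. The sketch below is a minimal illustration rather than the authors' code: it downsamples the majority class of a labeled DataFrame to reach a target minority:majority ratio. The column name `label` (1 = minority/positive class) and the default 1:1 target ratio are assumptions.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rus-demo").getOrCreate()

def random_undersample(df, label_col="label", target_ratio=1.0, seed=42):
    """Randomly discard majority-class rows until minority:majority ~= target_ratio."""
    n_pos = df.filter(df[label_col] == 1).count()  # minority (positive) class
    n_neg = df.filter(df[label_col] == 0).count()  # majority (negative) class
    # Fraction of negatives to keep so that n_pos / (frac * n_neg) == target_ratio.
    frac = min(1.0, n_pos / (target_ratio * n_neg))
    return df.sampleBy(label_col, fractions={0: frac, 1: 1.0}, seed=seed)

# Usage: balanced = random_undersample(train_df, target_ratio=1.0)
```

Unlike SMOTE-style oversampling, this shrinks the training set, which is exactly why the paper reports reduced computational burden and training time.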


Author(s):  
Kathryn M. de Luna

This chapter uses two case studies to explore how historians study language movement and change through comparative historical linguistics. The first case study stands as a short chapter in the larger history of the expansion of Bantu languages across eastern, central, and southern Africa. It focuses on the expansion of proto-Kafue, ca. 950–1250, from a linguistic homeland in the middle Kafue River region to lands beyond the Lukanga swamps to the north and the Zambezi River to the south. This expansion was made possible by a dramatic reconfiguration of ties of kinship. The second case study explores linguistic evidence for ridicule along the Lozi-Botatwe frontier in the mid- to late 19th century. Significantly, the units and scales of language movement and change in precolonial periods rendered visible through comparative historical linguistics bring to our attention alternative approaches to language change and movement in contemporary Africa.


Author(s):  
A.C.C. Coolen
A. Annibale
E.S. Roberts

This chapter reviews graph generation techniques in the context of applications. The first case study is power grids, where proposed strategies to prevent blackouts have been tested on tailored random graphs. The second case study is in social networks; applications of random graphs to social networks are extremely wide-ranging, and the particular aspect examined here is modelling the spread of disease on a social network, and how a construction based on projecting from a bipartite graph successfully captures some of the clustering observed in real social networks. The third case study is on null models of food webs, discussing the specific constraints relevant to this application and the topological features that may contribute to the stability of an ecosystem. The final case study is taken from molecular biology, discussing the importance of unbiased graph sampling when considering whether motifs are over-represented in a protein–protein interaction network.
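The bipartite-projection construction mentioned for the social-network case study is easy to reproduce with standard tools. The sketch below is an illustrative experiment, not the chapter's code: it builds a random person-group bipartite graph with networkx, projects it onto the people, and compares the resulting clustering coefficient with an Erdős–Rényi graph of matching density. All sizes and probabilities are arbitrary choices.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Random bipartite graph: 200 people, 50 groups, each person-group tie with prob 0.05.
B = bipartite.random_graph(200, 50, 0.05, seed=1)
people = [n for n, d in B.nodes(data=True) if d["bipartite"] == 0]

# Project onto people: two people are linked if they share at least one group.
G = bipartite.projected_graph(B, people)

# Erdős–Rényi graph with the same number of nodes and (roughly) the same density.
n, m = G.number_of_nodes(), G.number_of_edges()
p = 2 * m / (n * (n - 1))
ER = nx.gnp_random_graph(n, p, seed=1)

# The projection turns each group into a clique, so it retains far more
# triangles than the density-matched ER graph.
print("projection clustering:", nx.average_clustering(G))
print("ER clustering:        ", nx.average_clustering(ER))
```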


Author(s):  
Ashish Singla
Jyotindra Narayan
Himanshu Arora

In this paper, an attempt is made to investigate the potential of redundant manipulators in tracking trajectories in narrow channels. The behavior of redundant manipulators is important in many challenging applications, such as underwater welding in narrow tanks, checking blockages in sewerage pipes, and performing laparoscopic operations. To demonstrate this snake-like behavior, a redundancy resolution scheme is utilized using two different approaches. The first approach is based on the concept of task priority, where a given task is split and prioritized into several subtasks, such as singularity avoidance, obstacle avoidance, torque minimization, and preference of position over orientation. The second approach is based on the Adaptive Neuro-Fuzzy Inference System (ANFIS), where training is provided through given datasets and the results are back-propagated using an augmentation of neural networks with fuzzy logic. Three case studies are considered in this work to demonstrate the redundancy resolution of serial manipulators. The first case study, of a 3-link manipulator, is attempted with both approaches, where the objective is to track the desired trajectory while avoiding multiple obstacles. The second case study, of a 7-link manipulator tracking a trajectory in a narrow channel, is investigated using the concept of task priority. The realistic application of minimally invasive surgery (MIS) based trajectory tracking is considered as the third case study, which is attempted using the ANFIS approach. The 5-link spatial redundant manipulator, also known as a patient-side manipulator, being developed at CSIR-CSIO, Chandigarh, is used to track the desired surgical cuts. Through the three case studies, it is demonstrated that both approaches give satisfactory results.
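The task-priority idea behind the first approach can be illustrated with the standard null-space projection law q_dot = J+ x_dot + (I - J+ J) q_dot_0, where the secondary task acts only in the null space of the primary (end-effector) task. The sketch below is a generic illustration under that formula, not the paper's scheme: it resolves redundancy for a planar 3-link arm with joint-centering as the secondary objective. Link lengths, gains, and the target point are arbitrary assumptions.

```python
import numpy as np

L = np.array([1.0, 0.8, 0.6])   # assumed link lengths
q = np.array([0.3, 0.4, 0.5])   # initial joint angles
q_mid = np.zeros(3)             # secondary task: keep joints near zero
target = np.array([1.2, 1.0])   # assumed reachable end-effector goal
dt, k_task, k_null = 0.01, 2.0, 0.5

def fk(q):
    """End-effector position of a planar 3-link arm."""
    angles = np.cumsum(q)
    return np.array([np.sum(L * np.cos(angles)), np.sum(L * np.sin(angles))])

def jacobian(q):
    """2x3 position Jacobian of the planar arm."""
    angles = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(angles[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(angles[i:]))
    return J

for _ in range(2000):
    J = jacobian(q)
    J_pinv = np.linalg.pinv(J)
    dx = k_task * (target - fk(q))   # primary task: reach the target
    dq0 = k_null * (q_mid - q)       # secondary task: joint centering
    # Null-space projection: the secondary motion cannot disturb the primary task.
    dq = J_pinv @ dx + (np.eye(3) - J_pinv @ J) @ dq0
    q = q + dt * dq

print("final position error:", np.linalg.norm(target - fk(q)))
```

The extra degree of freedom is what lets the arm "snake" through a narrow channel: the null-space term can push joints away from obstacles or limits without degrading trajectory tracking.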


2021
Vol 21 (1)
Author(s):  
Markus J. Ankenbrand
Liliia Shainberg
Michael Hock
David Lohr
Laura M. Schreiber

Background: Image segmentation is a common task in medical imaging, e.g., for volumetry analysis in cardiac MRI. Artificial neural networks are used to automate this task with performance similar to that of manual operators. However, this performance is only achieved on the narrow tasks the networks are trained on, and it drops dramatically when data characteristics differ from the training set properties. Moreover, neural networks are commonly considered black boxes, because it is hard to understand how they make decisions and why they fail; it is therefore also hard to predict whether they will generalize and work well with new data. Here we present a generic method for segmentation model interpretation. Sensitivity analysis is an approach in which model input is modified in a controlled manner and the effect of these modifications on the model output is evaluated. This yields insights into the sensitivity of the model to these alterations and therefore into the importance of certain features for segmentation performance.
Results: We present an open-source Python library (misas) that facilitates the use of sensitivity analysis with arbitrary data and models. We show that this method is a suitable approach to answer practical questions regarding the use and functionality of segmentation models. We demonstrate this in two case studies on cardiac magnetic resonance imaging. The first case study explores the suitability of a published network for use on a public dataset the network has not been trained on. The second case study demonstrates how sensitivity analysis can be used to evaluate the robustness of a newly trained model.
Conclusions: Sensitivity analysis is a useful tool for deep learning developers as well as for users such as clinicians. It extends their toolbox, enabling and improving the interpretability of segmentation models. Enhancing our understanding of neural networks through sensitivity analysis also assists in decision making. Although demonstrated only on cardiac magnetic resonance images, this approach and software are much more broadly applicable.
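The core loop of such a sensitivity analysis is simple to sketch. The code below is a generic illustration of the idea, not the misas API: it rotates an input image by increasing angles, re-runs a segmentation model, and tracks how the Dice overlap with the unperturbed prediction degrades. The `predict` callable is a placeholder standing in for any segmentation model.

```python
import numpy as np
from scipy.ndimage import rotate

def dice(a, b, eps=1e-8):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)

def rotation_sensitivity(image, predict, angles=(0, 5, 10, 20, 45, 90)):
    """Measure how segmentation degrades as the input is rotated.

    `predict` is a placeholder: any callable mapping an image to a binary mask.
    Each prediction is rotated back so masks are compared in a common frame.
    """
    baseline = predict(image)
    scores = {}
    for angle in angles:
        perturbed = rotate(image, angle, reshape=False, order=1)
        pred = predict(perturbed)
        realigned = rotate(pred.astype(float), -angle, reshape=False, order=0) > 0.5
        scores[angle] = dice(baseline, realigned)
    return scores

# Usage with a trivial stand-in model (thresholding) and a random image:
img = np.random.rand(128, 128)
print(rotation_sensitivity(img, predict=lambda x: x > 0.7))
```

The same loop generalizes to other controlled perturbations (contrast, noise, cropping), which is the pattern the library packages up for arbitrary models.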


Author(s):  
Sener Dikmese
Kishor Lamichhane
Markku Renfors

Cognitive radio (CR) technology with dynamic spectrum management capabilities is widely advocated for effectively utilizing unused spectrum resources. The main idea behind CR technology is to trigger secondary communications that exploit the unused spectral resources. However, CR technology relies heavily on spectrum sensing techniques, which are applied to estimate the presence of primary user (PU) signals. This paper first focuses on novel analysis filter bank (AFB) and FFT-based cooperative spectrum sensing (CSS) techniques as conceptually and computationally simplified CSS methods, based on subband energies, for detecting spectral holes in the part of the radio spectrum of interest. To counteract practical wireless channel effects, collaborative subband-based approaches to PU signal sensing are studied; CSS can relax the problems of both hidden nodes and fading multipath channels. FFT- and AFB-based receiver-side sensing methods are applied to the OFDM waveform and the filter bank-based multicarrier (FBMC) waveform, respectively, the latter being a candidate beyond-OFDM/beyond-5G scheme. Subband energies are then applied to enhanced energy detection (ED)-based CSS methods, which are proposed in the context of wideband, multimode sensing. Our first case study focuses on sensing potential spectral gaps close to relatively strong primary users, considering also the effects of spectral regrowth due to power amplifier nonlinearities; it shows that AFB-based CSS with the FBMC waveform improves the performance significantly. Our second case study considers a novel maximum–minimum energy detector (Max–Min ED)-based CSS, which is expected to effectively overcome the issue of noise uncertainty (NU) with remarkably lower implementation complexity than existing methods. The developed algorithm, with reduced complexity, enhanced detection performance, and improved reliability, is presented as an attractive solution for counteracting practical wireless channel effects at low SNR. Closed-form analytic expressions are derived for the threshold and for the false alarm and detection probabilities in frequency-selective scenarios under NU, and the validity of the novel expressions is justified through comparisons with results from computer simulations.
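For readers who want to experiment with the basic quantity underlying these detectors, the sketch below simulates a plain energy detector in AWGN and compares its empirical false alarm rate with the usual CLT-based threshold lambda = sigma^2 * (1 + Q^{-1}(P_fa) / sqrt(N)) for the normalized statistic T = (1/N) * sum(|x[n]|^2). This is a textbook baseline, not the paper's Max–Min ED algorithm, and it assumes known noise variance; the sample count, SNR, and target P_fa are arbitrary.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N = 256                  # samples per sensing window
pfa_target = 0.05        # target false alarm probability
sigma2 = 1.0             # noise variance (assumed known; NU is the hard part)
snr = 10 ** (-10 / 10)   # -10 dB primary-user SNR
trials = 20000

# CLT-based threshold: under H0, T has mean sigma2 and std sigma2/sqrt(N)
# for complex Gaussian samples, so lambda = sigma2 * (1 + Qinv(Pfa)/sqrt(N)).
lam = sigma2 * (1 + norm.isf(pfa_target) / np.sqrt(N))

noise = (rng.standard_normal((trials, N))
         + 1j * rng.standard_normal((trials, N))) * np.sqrt(sigma2 / 2)
signal = (rng.standard_normal((trials, N))
          + 1j * rng.standard_normal((trials, N))) * np.sqrt(snr * sigma2 / 2)

T_h0 = np.mean(np.abs(noise) ** 2, axis=1)           # H0: noise only
T_h1 = np.mean(np.abs(noise + signal) ** 2, axis=1)  # H1: PU signal present

print("empirical Pfa:", np.mean(T_h0 > lam), "(target", pfa_target, ")")
print("empirical Pd: ", np.mean(T_h1 > lam))
```

In practice sigma2 is only known within some uncertainty bound, which is precisely the noise uncertainty problem the paper's Max–Min detector is designed to mitigate.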

