Visualizing Effects of COVID-19 Social Isolation with Residential Activity Big Data Sensor Data

Author(s):  
Anuradha Rajkumar ◽  
Bruce Wallace ◽  
Laura Ault ◽  
Julien Lariviere-Chartier ◽  
Frank Knoefel ◽  
...  
2017 ◽  
Vol 8 (2) ◽  
pp. 88-105 ◽  
Author(s):  
Gunasekaran Manogaran ◽  
Daphne Lopez

Ambient intelligence is an emerging platform that combines advances in sensors and sensor networks, pervasive computing, and artificial intelligence to capture real-time climate data. Such platforms continuously generate several exabytes of unstructured sensor data, which is therefore often called big climate data. Researchers are now trying to use big climate data to monitor and predict climate change and the diseases it may drive. Traditional data processing techniques and tools cannot handle such huge amounts of climate data, so advanced big data architectures are needed for processing real-time climate data. The purpose of this paper is to propose a big-data-based surveillance system that analyzes spatial climate big data and continuously monitors the correlation between climate change and dengue. The proposed disease surveillance system has been implemented with the help of Apache Hadoop MapReduce and its supporting tools.
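The abstract describes aggregating climate records and dengue counts with Hadoop MapReduce. As a hedged illustration of the map/shuffle/reduce pattern such a system relies on (not the authors' actual jobs; the records, regions, and field names below are invented), here is a pure-Python sketch:

```python
from collections import defaultdict

# Hypothetical climate records: (region, month, rainfall_mm, dengue_cases).
records = [
    ("north", "2016-06", 310.0, 42),
    ("north", "2016-07", 280.5, 57),
    ("south", "2016-06", 95.2, 8),
    ("south", "2016-07", 120.4, 11),
]

def mapper(record):
    # Emit (key, value) pairs, as a Hadoop Mapper would.
    region, month, rainfall, cases = record
    yield region, (rainfall, cases)

def reducer(region, values):
    # Aggregate per-region totals, mimicking a Hadoop reduce step.
    rain = sum(v[0] for v in values)
    cases = sum(v[1] for v in values)
    return region, {"total_rainfall_mm": rain, "total_cases": cases}

# Shuffle phase: group mapper output by key before reducing.
groups = defaultdict(list)
for rec in records:
    for key, value in mapper(rec):
        groups[key].append(value)

summary = dict(reducer(k, vs) for k, vs in groups.items())
```

In a real deployment the same mapper/reducer logic would be distributed by Hadoop across HDFS blocks; the per-region summaries could then feed a correlation test between rainfall and case counts.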


Author(s):  
Joaquin Vanschoren ◽  
Ugo Vespier ◽  
Shengfa Miao ◽  
Marvin Meeng ◽  
Ricardo Cachucho ◽  
...  

Sensors are increasingly being used to monitor the world around us. They measure movements of structures such as bridges, windmills, and plane wings, humans' vital signs, atmospheric conditions, and fluctuations in power and water networks. In many cases, this results in large networks with different types of sensors, generating impressive amounts of data. As the volume and complexity of data increase, their effective use becomes more challenging, and novel solutions are needed on both a technical and a scientific level. Grounded in several real-world applications, this chapter discusses the challenges involved in large-scale sensor data analysis and describes practical solutions to address them. Due to the sheer size of the data and the large amount of computation involved, these are clearly “Big Data” applications.


2019 ◽  
Vol 9 (15) ◽  
pp. 3065 ◽  
Author(s):  
Dresp-Langley ◽  
Ekseth ◽  
Fesl ◽  
Gohshi ◽  
Kurz ◽  
...  

Detecting quality in large unstructured datasets requires capacities far beyond the limits of human perception and communicability; as a result, there is an emerging trend towards increasingly complex analytic solutions in data science to cope with this problem. This trend towards analytic complexity represents a severe challenge for the principle of parsimony (Occam’s razor) in science. This review article combines insights from domains such as physics, computational science, data engineering, and cognitive science to review the specific properties of big data. Problems in detecting data quality without abandoning the principle of parsimony are then highlighted on the basis of specific examples. Computational building-block approaches to data clustering can help deal with large unstructured datasets in minimized computation time, and meaning can be extracted rapidly and parsimoniously from large sets of unstructured image or video data through relatively simple unsupervised machine learning algorithms. The review then examines why we still largely lack the expertise to exploit big data wisely, that is, to extract task-relevant information, recognize patterns and generate new information, or simply store and further process large amounts of sensor data, and presents examples illustrating why subjective views and pragmatic methods are needed to analyze big data contents. The review concludes with how cultural differences between East and West are likely to affect the course of big data analytics, and the development of increasingly autonomous artificial intelligence (AI) aimed at coping with the big data deluge in the near future.


Author(s):  
Jayashree K. ◽  
Chithambaramani R.

Big data has become a chief driver of innovation across academia, government, and industry. Big data comprises massive sensor data, raw and semi-structured log data from IT industries, and the exploding quantity of data from social media. Big data needs big storage, and this volume makes analytical, processing, and retrieval operations very difficult and time consuming. One way to overcome these problems is to cluster big data into a compact format. Thus, this chapter discusses the background of big data and clustering, and also discusses the various applications of big data in detail. Related work, the research challenges of big data, and future directions are addressed in this chapter as well.
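Since the chapter's proposed remedy is clustering big data into a compact format, a minimal sketch of Lloyd's k-means, a textbook clustering algorithm, may make the idea concrete; the points, seeding strategy, and iteration count below are illustrative and not taken from the chapter:

```python
def kmeans(points, k, iters=20):
    """Minimal Lloyd's k-means on 2-D points (pure-Python sketch)."""
    # Seed with the first k points; k-means++ is the usual better choice.
    centroids = list(points[:k])
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: (p[0] - centroids[i][0]) ** 2
                                + (p[1] - centroids[i][1]) ** 2)
            clusters[j].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, c in enumerate(clusters):
            if c:
                centroids[i] = (sum(p[0] for p in c) / len(c),
                                sum(p[1] for p in c) / len(c))
    return centroids, clusters

points = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),   # group near the origin
          (5.1, 5.0), (4.9, 5.2), (5.0, 4.8)]   # group near (5, 5)
centroids, clusters = kmeans(points, k=2)
```

Each centroid then stands in for its whole cluster, which is exactly the "compact format" the chapter refers to: downstream analytical or retrieval operations can work on k representatives instead of every record.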


2015 ◽  
Vol 30 (1) ◽  
pp. 70-74 ◽  
Author(s):  
Jannis Kallinikos ◽  
Ioanna D Constantiou

We elaborate on key issues of our paper “New games, new rules: big data and the changing context of strategy” as a means of addressing some of the concerns raised by the paper's commentators. We initially deal with the issue of social data and the role it plays in the current data revolution. The massive involvement of lay publics, as instrumented by social media, breaks with the strong expert cultures that have underlain the production and use of data in modern organizations. It also sets the interactive and communicative processes by which social data is produced apart from sensor data and the technological recording of facts. We further discuss the significance of the mechanisms by which big data is produced, as distinct from the attributes of big data often discussed in the literature. In the final section of the paper, we qualify the alleged importance of algorithms and claim that the structures of data capture, and the architectures in which data generation is embedded, are fundamental to the phenomenon of big data.


Sensors ◽  
2019 ◽  
Vol 19 (12) ◽  
pp. 2706 ◽  
Author(s):  
Miao Gao ◽  
Guo-You Shi

Large volumes of automatic identification system (AIS) data provide new ideas and methods for ship data mining and navigation behavior pattern analysis. However, such big data has a low value per unit, creating demands for large-scale computing, storage, and display; learning efficiency is low, and the learning direction is blind and untargeted. Therefore, key feature point (KFP) extraction from ship trajectories plays an important role in fields such as ship navigation behavior analysis and big data mining. In this paper, we propose a ship spatiotemporal KFP online extraction algorithm applied to AIS trajectory data. The sliding window algorithm is modified for application to ship navigation angle deviation, position deviation, and the spatiotemporal characteristics of AIS data. Next, to facilitate subsequent use of the algorithm, a recommended threshold range for the two corresponding parameters is discussed. Finally, the performance of the proposed method is compared with that of the Douglas–Peucker (DP) algorithm to assess its feature extraction accuracy and operational efficiency. The results show that the proposed improved sliding window algorithm can rapidly and easily extract KFPs from AIS trajectory data, providing significant benefits for ship traffic flow and navigational behavior learning.
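The paper's exact sliding-window formulation is not given in the abstract, but the underlying idea, keeping a trajectory point when its course change or its position deviation from the current window's chord exceeds a threshold, can be sketched as follows. The thresholds, geometry, and test track are illustrative, not the authors' recommended values:

```python
import math

def extract_keypoints(track, angle_thresh_deg=15.0, dist_thresh=0.5):
    """Sliding-window sketch of key feature point (KFP) extraction.

    track: list of (x, y) positions in time order.  A point is kept
    when the course change at that point, or its perpendicular
    deviation from the anchor-to-next chord of the current window,
    exceeds the given (illustrative) thresholds.
    """
    if len(track) <= 2:
        return list(track)
    keypoints = [track[0]]
    anchor = track[0]
    for i in range(1, len(track) - 1):
        prev, cur, nxt = track[i - 1], track[i], track[i + 1]
        # Course change (degrees) between incoming and outgoing segments.
        h1 = math.atan2(cur[1] - prev[1], cur[0] - prev[0])
        h2 = math.atan2(nxt[1] - cur[1], nxt[0] - cur[0])
        turn = abs(math.degrees(h2 - h1))
        turn = min(turn, 360.0 - turn)
        # Perpendicular deviation of cur from the anchor->next chord.
        ax, ay = nxt[0] - anchor[0], nxt[1] - anchor[1]
        px, py = cur[0] - anchor[0], cur[1] - anchor[1]
        chord = math.hypot(ax, ay) or 1e-12
        dev = abs(ax * py - ay * px) / chord
        if turn > angle_thresh_deg or dev > dist_thresh:
            keypoints.append(cur)
            anchor = cur          # restart the window at the kept point
    keypoints.append(track[-1])
    return keypoints

# A straight leg followed by a 90-degree turn: only the corner survives.
track = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]
kfps = extract_keypoints(track)
```

Like the Douglas–Peucker baseline the paper compares against, this compresses a trajectory to its shape-defining points, but it does so in a single online pass, which is what makes it suitable for streaming AIS data.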


2015 ◽  
Vol 2015.7 (0) ◽  
pp. _28pm1-E-2-_28pm1-E-2
Author(s):  
Sana Talmoudi ◽  
Yoshio Takaeda ◽  
Tetsuya Kanada ◽  
Hiroki Kuwano

2018 ◽  
Vol 9 (2) ◽  
pp. 69-79 ◽  
Author(s):  
Klemen Kenda ◽  
Dunja Mladenić

Abstract. Background: The Internet of Things (IoT), earth observation, and big scientific experiments are today's sources of extensive amounts of sensor big data. We are faced with large amounts of data at low measurement cost. A standard approach in such cases is stream mining, implying that we look at each measurement only once during real-time processing; this requires the methods to be completely autonomous. In the past, very little attention was given to the most time-consuming part of the data mining process, i.e., data pre-processing. Objectives: In this paper we propose an algorithm for data cleaning that can be applied to real-world streaming big data. Methods/Approach: We use a short-term prediction method based on the Kalman filter to detect admissible intervals for future measurements. The model can adapt to concept drift and is useful for detecting random additive outliers in a sensor data stream. Results: For datasets with low noise, our method has proven to perform better than the method currently commonly used in batch processing scenarios; our results on higher-noise datasets are comparable. Conclusions: We have demonstrated successful applications of the proposed method in real-world scenarios, including groundwater level, server load, and smart-grid data.
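The abstract's idea of Kalman-filter-predicted admissible intervals can be sketched with a scalar random-walk filter: predict the next value, flag any measurement outside the prediction band as an additive outlier, and skip the update for flagged values. The noise parameters, the 3-sigma band, and the sample stream below are illustrative and not from the paper:

```python
import math

def kalman_clean(stream, q=1e-3, r=0.25, band=3.0):
    """Streaming outlier detection with a scalar Kalman filter (sketch).

    A random-walk state model predicts the next measurement; values
    outside the admissible interval (prediction +/- band * sigma) are
    flagged as additive outliers and excluded from the state update.
    q: process noise, r: measurement noise, band: interval half-width
    in standard deviations (all illustrative values).
    """
    x, p = stream[0], 1.0        # initial state estimate and variance
    flags = [False]
    for z in stream[1:]:
        p_pred = p + q           # predict: state persists, variance grows
        sigma = math.sqrt(p_pred + r)
        if abs(z - x) > band * sigma:
            flags.append(True)   # outside admissible interval: outlier
            continue             # skip update so the spike cannot drift the state
        flags.append(False)
        k = p_pred / (p_pred + r)        # Kalman gain
        x = x + k * (z - x)              # correct with the measurement
        p = (1 - k) * p_pred
    return flags

# A slowly varying groundwater-level-like signal with one spike.
stream = [10.0, 10.1, 10.05, 9.95, 25.0, 10.1, 10.0]
flags = kalman_clean(stream)
```

Because the prediction variance keeps adapting, the admissible interval widens when the signal is uncertain and tightens as the filter settles, which is what lets the same mechanism follow gradual concept drift while still rejecting one-off spikes.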

