Demand-Driven Data Acquisition for Large Scale Fleets

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7190
Author(s):  
Philip Matesanz ◽  
Timo Graen ◽  
Andrea Fiege ◽  
Michael Nolting ◽  
Wolfgang Nejdl

Automakers manage vast fleets of connected vehicles and face an ever-increasing demand for their sensor readings. This demand originates from many stakeholders, each potentially requiring different sensors from different vehicles. Currently, this demand remains largely unfulfilled due to a lack of systems that can handle such diverse demands efficiently. Vehicles are usually passive participants in data acquisition, each continuously reading and transmitting the same static set of sensors. However, in a multi-tenant setup with diverse data demands, each vehicle potentially needs to provide different data instead. We present a system that performs such vehicle-specific minimization of data acquisition by mapping individual data demands to individual vehicles. We collect personal data only after prior consent and fulfill the requirements of the GDPR. Non-personal data can be collected by directly addressing individual vehicles. The system consists of a software component natively integrated with a major automaker’s vehicle platform and a cloud platform brokering access to acquired data. Sensor readings are either provided via near real-time streaming or as recorded trip files that provide specific consistency guarantees. A performance evaluation with over 200,000 simulated vehicles has shown that our system can increase server capacity on-demand and process streaming data within 269 ms on average during peak load. The resulting architecture can be used by other automakers or operators of large sensor networks. Native vehicle integration is not mandatory; the architecture can also be used with retrofitted hardware such as OBD readers.
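As a rough illustration of the demand-mapping idea described above, the sketch below (Python; all names and data are hypothetical, and this is not the authors' implementation) maps per-stakeholder sensor demands onto individual vehicles so that each vehicle only acquires the union of sensors actually requested from it, and personal data is included only after prior consent.

```python
# Illustrative sketch (not the authors' system): map per-stakeholder sensor
# demands to individual vehicles so each vehicle only acquires the sensors
# actually requested from it. Names and fields are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Demand:
    stakeholder: str           # who requests the data
    sensors: set[str]          # e.g. {"wheel_speed", "ambient_temp"}
    vehicle_ids: set[str]      # vehicles the request applies to
    personal: bool = False     # personal data requires prior consent (GDPR)


@dataclass
class Vehicle:
    vehicle_id: str
    consented_stakeholders: set[str] = field(default_factory=set)


def build_acquisition_plan(demands: list[Demand],
                           vehicles: dict[str, Vehicle]) -> dict[str, set[str]]:
    """Return, per vehicle, the minimal sensor set it must acquire."""
    plan: dict[str, set[str]] = {vid: set() for vid in vehicles}
    for d in demands:
        for vid in d.vehicle_ids & plan.keys():
            # Personal data is only collected after prior consent.
            if d.personal and d.stakeholder not in vehicles[vid].consented_stakeholders:
                continue
            plan[vid] |= d.sensors
    return plan


if __name__ == "__main__":
    fleet = {"V1": Vehicle("V1", {"fleet_ops"}), "V2": Vehicle("V2")}
    demands = [
        Demand("fleet_ops", {"gps", "speed"}, {"V1", "V2"}, personal=True),
        Demand("weather_svc", {"ambient_temp"}, {"V2"}),
    ]
    # V1 acquires gps and speed (consent given); V2 acquires only ambient_temp.
    print(build_acquisition_plan(demands, fleet))
```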

2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Xixi Yan ◽  
Guanghui He ◽  
Jinxia Yu ◽  
Yongli Tang ◽  
Mingjie Zhao

In the Internet of Things (IoT) environment, intelligent devices collect and share large-scale sensitive personal data for a wide range of applications. However, the storage and computing power of IoT devices is limited, so the mass of perceived data must be encrypted and transmitted to the cloud platform interconnected with the IoT devices. How to reduce the encryption/decryption cost and preserve the privacy of sensitive data in the IoT environment is therefore an issue that deserves research. To mitigate these issues, an offline/online attribute-based encryption scheme that supports partially hidden policies and outsourced decryption is proposed. The scheme adopts offline/online attribute-based encryption: the key generation and encryption algorithms are each divided into an offline stage and an online stage. Meanwhile, to address the problem of policy disclosure on the cloud platform, policy hiding is supported; that is, each attribute is split into an attribute name and an attribute value, so that the sensitive attribute values need not be revealed in the access policy. For the pairing operations involved in the decryption process, verifiable outsourced decryption is implemented. The scheme is constructed over composite-order bilinear groups and achieves full security in the standard model. Finally, comparisons with other schemes in terms of functionality and computational overhead show that the proposed scheme is more efficient and better suited to mobile devices with limited computing and storage capabilities in the IoT environment.
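The following toy sketch (Python; the group parameters are illustrative and not secure) shows only the general offline/online pattern the scheme relies on, using plain ElGamal-style encryption rather than the paper's composite-order pairing-based ABE: the expensive exponentiations are precomputed offline, so the online stage performs a single multiplication once the message is known.

```python
# Toy sketch of the offline/online encryption pattern only. This is plain
# ElGamal over a small prime-order group, NOT the authors' pairing-based ABE
# and NOT secure parameters; it only shows how the expensive exponentiations
# can be precomputed offline so the online stage is cheap.
import secrets

P = 0xFFFFFFFFFFFFFFC5   # small prime (2**64 - 59), demo only
G = 5                    # fixed base in Z_P*, demo only

def keygen():
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)          # secret key, public key h = g^x

def encrypt_offline(h):
    """Offline stage: all modular exponentiations, message-independent."""
    r = secrets.randbelow(P - 2) + 1
    return {"c1": pow(G, r, P), "mask": pow(h, r, P)}

def encrypt_online(pre, m):
    """Online stage: one multiplication once the message is known."""
    return pre["c1"], (m * pre["mask"]) % P

def decrypt(x, c1, c2):
    return (c2 * pow(pow(c1, x, P), -1, P)) % P

if __name__ == "__main__":
    sk, pk = keygen()
    pre = encrypt_offline(pk)       # precomputed ahead of time on the device
    c1, c2 = encrypt_online(pre, 42)
    assert decrypt(sk, c1, c2) == 42
```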


Author(s):  
Mehdi Bahri ◽  
Eimear O’Sullivan ◽  
Shunwang Gong ◽  
Feng Liu ◽  
Xiaoming Liu ◽  
...  

Abstract Standard registration algorithms need to be independently applied to each surface to register, following careful pre-processing and hand-tuning. Recently, learning-based approaches have emerged that reduce the registration of new scans to running inference with a previously-trained model. The potential benefits are multifold: inference is typically orders of magnitude faster than solving a new instance of a difficult optimization problem, deep learning models can be made robust to noise and corruption, and the trained model may be re-used for other tasks, e.g. through transfer learning. In this paper, we cast the registration task as a surface-to-surface translation problem, and design a model to reliably capture the latent geometric information directly from raw 3D face scans. We introduce Shape-My-Face (SMF), a powerful encoder-decoder architecture based on an improved point cloud encoder, a novel visual attention mechanism, graph convolutional decoders with skip connections, and a specialized mouth model that we smoothly integrate with the mesh convolutions. Compared to the previous state-of-the-art learning algorithms for non-rigid registration of face scans, SMF only requires the raw data to be rigidly aligned (with scaling) with a pre-defined face template. Additionally, our model provides topologically-sound meshes with minimal supervision, offers faster training time, has orders of magnitude fewer trainable parameters, is more robust to noise, and can generalize to previously unseen datasets. We extensively evaluate the quality of our registrations on diverse data. We demonstrate the robustness and generalizability of our model with in-the-wild face scans across different modalities, sensor types, and resolutions. Finally, we show that, by learning to register scans, SMF produces a hybrid linear and non-linear morphable model. Manipulation of the latent space of SMF allows for shape generation, and morphing applications such as expression transfer in-the-wild. We train SMF on a dataset of human faces comprising 9 large-scale databases on commodity hardware.
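As a high-level illustration of the encoder-decoder idea only (not SMF itself: the visual attention mechanism, graph-convolutional decoder and mouth model are omitted, and all layer sizes and vertex counts are invented), a minimal PyTorch sketch might pair a PointNet-style scan encoder with a decoder that regresses the vertices of a fixed-topology template:

```python
# Minimal sketch, assuming PyTorch is available; a drastically simplified
# stand-in for a learning-based scan-to-template registration model.
import torch
import torch.nn as nn

class ScanEncoder(nn.Module):
    """Shared per-point MLP followed by max pooling (PointNet-style)."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, points):            # points: (B, N, 3) raw scan
        feats = self.point_mlp(points)    # (B, N, latent_dim)
        return feats.max(dim=1).values    # order-invariant pooling

class TemplateDecoder(nn.Module):
    """Regress the vertices of a fixed-topology face template from the code."""
    def __init__(self, n_vertices, latent_dim=256):
        super().__init__()
        self.n_vertices = n_vertices
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, n_vertices * 3),
        )

    def forward(self, code):
        return self.mlp(code).view(-1, self.n_vertices, 3)

if __name__ == "__main__":
    scan = torch.randn(2, 5000, 3)             # two raw scans, 5000 points each
    encoder, decoder = ScanEncoder(), TemplateDecoder(n_vertices=5023)
    registered = decoder(encoder(scan))        # (2, 5023, 3) template vertices
    print(registered.shape)
```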


2004 ◽  
Vol 49 (7) ◽  
pp. 89-95
Author(s):  
J. Pittock ◽  
R. Holland

More than for any other biome, freshwater biodiversity is increasingly imperiled, particularly due to poor stream flow management and increasing demand for water diversions. The adoption by the world's governments of targets to extend water services to the poor and, at the same time, to conserve biodiversity increases the need to better direct investments in freshwater management. In this paper WWF draws on examples from its work to identify areas where investment can be focused to assure efficient water use and improve stream flow management, namely:
• Prioritize and target those river basins and sub-catchments that are most critical for conservation of freshwater biodiversity to maintain stream flows;
• Link strategic field, policy and market interventions at different scales in river basins to maximize the impact of interventions;
• Implement the World Commission on Dams guidelines to minimize investment in large scale and costly infrastructure projects;
• Apply market mechanisms and incentives for more sustainable production of the world's most water-consuming crops;
• Enhance statutory river basin management organizations to draw on their regulatory and financial powers;
• Implement international agreements, such as the Convention on Wetlands;
• Integrate environment and development policies.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Haron M. Abdel-Raziq ◽  
Daniel M. Palmer ◽  
Phoebe A. Koenig ◽  
Alyosha C. Molnar ◽  
Kirstin H. Petersen

Abstract In digital agriculture, large-scale data acquisition and analysis can improve farm management by allowing growers to constantly monitor the state of a field. Deploying large autonomous robot teams to navigate and monitor cluttered environments, however, is difficult and costly. Here, we present methods that would allow us to leverage managed colonies of honey bees equipped with miniature flight recorders to monitor orchard pollination activity. Tracking honey bee flights can inform estimates of crop pollination, allowing growers to improve yield and resource allocation. Honey bees are adept at maneuvering complex environments and collectively pool information about nectar and pollen sources through thousands of daily flights. Additionally, colonies are present in orchards before and during bloom for many crops, as growers often rent hives to ensure successful pollination. We characterize existing Angle-Sensitive Pixels (ASPs) for use in flight recorders and calculate memory and resolution trade-offs. We further integrate ASP data into a colony foraging simulator and show how large numbers of flights refine system accuracy, using methods from the robotic mapping literature. Our results indicate promising potential for such agricultural monitoring, where we leverage the superior ability of social insects to sense the physical world while providing data acquisition on par with explicitly engineered systems.
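A minimal sketch of the "many flights refine accuracy" point, with made-up numbers and independent of the paper's simulator, is shown below: noisy per-flight position fixes of a single foraging patch are fused by averaging, and the estimate error shrinks roughly as 1/sqrt(number of flights), in the spirit of landmark estimation from the robotic mapping literature.

```python
# Illustrative sketch (not the paper's simulator): fuse many noisy
# flight-recorder fixes of one foraging patch and watch the estimate tighten
# as the number of recorded flights grows. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
true_patch = np.array([120.0, -40.0])   # metres from the hive (made up)
sensor_sigma = 25.0                     # assumed per-flight localisation error (m)

for n_flights in (10, 100, 1000, 10000):
    fixes = true_patch + rng.normal(0.0, sensor_sigma, size=(n_flights, 2))
    estimate = fixes.mean(axis=0)       # fuse flights by averaging
    err = np.linalg.norm(estimate - true_patch)
    print(f"{n_flights:>6} flights -> estimate error {err:6.2f} m")
# The error falls roughly with 1/sqrt(n_flights), so thousands of daily
# flights per colony quickly average out per-flight sensor noise.
```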


2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Felix Gille ◽  
Caroline Brall

Abstract Public trust is paramount for the well-functioning of data-driven healthcare activities such as digital health interventions, contact tracing or the build-up of electronic health records. As the use of personal data is the common denominator of these healthcare activities, healthcare actors have an interest in ensuring the privacy and anonymity of the personal data they depend on. Maintaining the privacy and anonymity of personal data contributes to the trustworthiness of these healthcare activities and is associated with the public's willingness to trust these activities with their personal data. An analysis of online news readership comments about the failed care.data programme in England revealed that parts of the public have a false understanding of anonymity in the context of privacy protection of personal data as used for healthcare management and medical research. Some of those commenting demanded complete anonymity of their data in order to be willing to trust the process of data collection and analysis. As this demand is impossible to fulfil and trust is built on a false understanding of anonymity, the inability to meet this demand risks undermining public trust. Since public concerns about the anonymity and privacy of personal data appear to be increasing, a large-scale information campaign about the limits and possibilities of anonymity with respect to the various uses of personal health data is urgently needed to help the public make better informed choices about providing personal data.


2018 ◽  
Vol 11 (1) ◽  
pp. 44-47 ◽  
Author(s):  
Daniel Vamos ◽  
Stefan Oniga ◽  
Anca Alexan

Abstract Personal activity trackers are nowadays part of our lives. They silently monitor our movements and can provide valuable information and even important alerts. Usually, however, the user's data is stored only on the activity tracker itself, and processing is limited by the device's modest computing power. It is therefore important that the user's data can be stored and processed in the cloud, making the activity tracker an IoT node. This paper proposes a simple IoT gateway solution for a custom user monitoring device.
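A minimal gateway sketch along these lines (Python; the endpoint URL, payload fields and polling interval are placeholders, not the paper's protocol or hardware interface) polls the tracker and forwards each sample to a cloud API as JSON:

```python
# Hypothetical gateway sketch: poll the tracker, wrap each sample in JSON and
# forward it to a cloud endpoint, turning the tracker into an IoT node whose
# data can be stored and processed remotely. Endpoint and fields are made up.
import json
import time
import urllib.request

CLOUD_URL = "https://example.com/api/activity"   # placeholder endpoint

def read_tracker_sample():
    """Stand-in for reading the custom tracker over BLE/serial."""
    return {"timestamp": time.time(), "steps": 42, "heart_rate": 71}

def forward_to_cloud(sample):
    req = urllib.request.Request(
        CLOUD_URL,
        data=json.dumps(sample).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status

if __name__ == "__main__":
    while True:                       # simple polling loop
        forward_to_cloud(read_tracker_sample())
        time.sleep(60)                # one sample per minute
```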


BMC Genomics ◽  
2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Xiujin Li ◽  
Hailiang Song ◽  
Zhe Zhang ◽  
Yunmao Huang ◽  
Qin Zhang ◽  
...  

Abstract Background With the emphasis on analysing genotype-by-environment interactions within the framework of genomic selection and genome-wide association analysis, there is an increasing demand for reliable tools that can simulate large-scale genomic data in order to assess related approaches. Results We proposed a theory for simulating large-scale genomic data with genotype-by-environment interactions and added this new function to our previously developed tool GPOPSIM. A simulated threshold trait with large-scale genomic data was also added. Validation of the simulated data indicated that GPOPSIM2.0 is an efficient tool for mimicking the phenotypic data of quantitative traits, threshold traits, and genetically correlated traits with large-scale genomic data while taking genotype-by-environment interactions into account. Conclusions This tool is useful for assessing methods for genotype-by-environment interactions and threshold traits.
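For illustration only (not GPOPSIM's algorithm; all parameter values are arbitrary), genotype-by-environment interactions can be mimicked by drawing per-SNP effects for two environments from a bivariate normal whose correlation is below one, and a threshold trait can be derived from the underlying liability:

```python
# Hedged sketch (not GPOPSIM's algorithm): simulate a simple G x E data set by
# giving each SNP correlated but non-identical effects in two environments,
# then derive a binary threshold trait from the liability. Values are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
n_ind, n_snp, r_g = 1000, 5000, 0.6          # individuals, SNPs, genetic corr.

# SNP genotypes coded 0/1/2 with allele frequencies ~ U(0.05, 0.5)
freq = rng.uniform(0.05, 0.5, n_snp)
Z = rng.binomial(2, freq, size=(n_ind, n_snp)).astype(float)

# Per-SNP effects for environments 1 and 2: correlation < 1 implies G x E.
cov = np.array([[1.0, r_g], [r_g, 1.0]]) / n_snp
effects = rng.multivariate_normal([0.0, 0.0], cov, size=n_snp)   # (n_snp, 2)

g = Z @ effects                               # true genetic values per environment
y = g + rng.normal(0.0, 1.0, size=g.shape)    # add environmental residuals

# Threshold trait in environment 1: observe 1 if liability exceeds a threshold.
threshold = np.quantile(y[:, 0], 0.7)         # ~30% incidence
binary_trait = (y[:, 0] > threshold).astype(int)

print("realised genetic correlation across environments:",
      np.corrcoef(g[:, 0], g[:, 1])[0, 1].round(2))
```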


2021 ◽  
Vol 17 (5) ◽  
pp. e1008977
Author(s):  
Amir Bahmani ◽  
Kyle Ferriter ◽  
Vandhana Krishnan ◽  
Arash Alavi ◽  
Amir Alavi ◽  
...  

Genomic data analysis across multiple cloud platforms is an ongoing challenge, especially when large amounts of data are involved. Here, we present Swarm, a framework for federated computation that promotes minimal data motion and facilitates crosstalk between genomic datasets stored on various cloud platforms. We demonstrate its utility via common queries of genomic variants across BigQuery in the Google Cloud Platform (GCP), Athena in Amazon Web Services (AWS), Apache Presto and MySQL. Compared to single-cloud platforms, the Swarm framework significantly reduced computational costs, run-time delays, and the risks of security breaches and privacy violations.
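The conceptual sketch below (Python; two in-memory SQLite databases stand in for the cloud engines, and the schema, table and gene names are hypothetical, not Swarm's API) shows the federation idea: the same aggregate query is pushed down to each backend where the data lives, and only the small per-backend summaries are moved and merged locally.

```python
# Conceptual sketch only (not the Swarm API): each "cloud" keeps its variant
# table where it lives, the federation layer pushes the same aggregate query
# to every backend, and only small summaries travel -- minimal data motion.
import sqlite3
from collections import Counter

def make_backend(variants):
    """Create a stand-in 'cloud' holding (chrom, pos, gene) variant rows."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE variants (chrom TEXT, pos INTEGER, gene TEXT)")
    db.executemany("INSERT INTO variants VALUES (?, ?, ?)", variants)
    return db

QUERY = "SELECT gene, COUNT(*) FROM variants WHERE chrom = ? GROUP BY gene"

def federated_gene_counts(backends, chrom):
    """Push the query to each backend, merge only the aggregated results."""
    merged = Counter()
    for db in backends:
        merged.update(dict(db.execute(QUERY, (chrom,)).fetchall()))
    return merged

if __name__ == "__main__":
    gcp_like = make_backend([("chr1", 100, "GENE_A"), ("chr1", 200, "GENE_B")])
    aws_like = make_backend([("chr1", 150, "GENE_B"), ("chr2", 300, "GENE_C")])
    print(federated_gene_counts([gcp_like, aws_like], "chr1"))
    # Counter({'GENE_B': 2, 'GENE_A': 1})
```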

