New developments on EDR (Event Data Recorder) for automated vehicles

2020 ◽  
Vol 10 (1) ◽  
pp. 140-146
Author(s):  
Klaus Böhm ◽  
Tibor Kubjatko ◽  
Daniel Paula ◽  
Hans-Georg Schweiger

With the new EU legislative rules on Event Data Recorders taking effect in 2022, the question is whether the data set under discussion is sufficient for clarifying accidents involving automated vehicles. Based on the reconstruction of real accidents involving vehicles with ADAS, combined with specially designed crash tests, a broader data set than that of the US EDR regulation (NHTSA 49 CFR Part 563.7) is proposed. The AHEAD working group, to which the authors contribute, has already elaborated a data model that fits the needs of automated driving; the structure of this data model is shown. Moreover, the particular benefit of storing internal video or photo feeds from the vehicle camera systems, combined with object data, is illustrated. When a sophisticated 3D measurement method is applied to the accident scene, the videos or photos can also serve as a control instance for the stored vehicle data. The AHEAD data model, enhanced with the storage of video and photo feeds, should be considered in the planned roadmap of the Informal Working Group (IWG) on EDR/DSSAD (Data Storage System for Automated Driving) reporting to UNECE WP29. In addition, over-the-air data access, using technology already applied in China for electric vehicles under the name Real Time Monitoring, would allow a quantum leap in forensic accident reconstruction.
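The abstract does not spell out the AHEAD data model itself, but a minimal sketch can illustrate the kind of record that goes beyond the 49 CFR Part 563.7 element set: time-stamped vehicle state enriched with automation status, perception-level object data, and references to stored camera frames. All field names below are hypothetical illustrations, not the actual AHEAD schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DetectedObject:
    # Hypothetical object-level data from the vehicle's perception stack
    object_id: int
    object_class: str                      # e.g. "pedestrian", "vehicle"
    rel_position_m: Tuple[float, float]    # (x, y) relative to the ego vehicle
    rel_velocity_mps: Tuple[float, float]

@dataclass
class EdrSample:
    # One time-stamped record; 49 CFR Part 563.7 covers only a subset of this
    timestamp_s: float
    speed_mps: float
    steering_angle_deg: float
    automation_level: int                  # SAE level active at this instant
    driver_override: bool                  # did a human take over control?
    objects: List[DetectedObject] = field(default_factory=list)
    camera_frame_ref: str = ""             # pointer to a stored video/photo frame
```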

2020 ◽  
Vol 245 ◽  
pp. 04027
Author(s):  
X. Espinal ◽  
S. Jezequel ◽  
M. Schulz ◽  
A. Sciabà ◽  
I. Vukotic ◽  
...  

HL-LHC will confront the WLCG community with enormous data storage, management, and access challenges. These are as much technical as economic. In the WLCG-DOMA Access working group, members of the experiments and site managers have explored different models for data access and storage strategies to reduce cost and complexity, taking into account the boundary conditions given by our community. Several of these scenarios have been evaluated quantitatively, such as the Data Lake model and incremental improvements of the current computing model, with respect to resource needs, costs, and operational complexity. To better understand these models in depth, analyses of traces of current data accesses and simulations of the impact of new concepts have been carried out. In parallel, the required technologies were evaluated in testbed and production environments at small and large scale. We give an overview of the activities and results of the working group, describe the models, and summarise the results of the technology evaluation, focusing on the impact of storage consolidation in the form of Data Lakes, where the use of streaming caches has emerged as a successful approach to reduce the impact of latency and bandwidth limitations. We describe the experience and evaluation of these approaches in different environments and usage scenarios, and present the results of the analysis and modelling efforts based on data access traces of the experiments.
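To illustrate why streaming caches help in the Data Lake model, the sketch below shows a read-through LRU cache placed between jobs and a remote lake: repeated reads are served locally, and only misses pay the wide-area latency. This is a generic illustration, not the working group's actual cache implementation (such as XCache).

```python
import collections

class StreamingCache:
    """Read-through LRU cache sketch: a site-local cache in front of a
    remote data lake, hiding latency and bandwidth limits for re-reads."""

    def __init__(self, capacity_bytes, fetch_remote):
        self.capacity = capacity_bytes
        self.fetch_remote = fetch_remote        # callable: name -> bytes
        self.store = collections.OrderedDict()  # name -> bytes, LRU order
        self.hits = self.misses = 0

    def read(self, name):
        if name in self.store:                  # cache hit: serve locally
            self.store.move_to_end(name)
            self.hits += 1
            return self.store[name]
        self.misses += 1
        data = self.fetch_remote(name)          # miss: high-latency remote read
        self.store[name] = data
        while sum(len(v) for v in self.store.values()) > self.capacity:
            self.store.popitem(last=False)      # evict least recently used
        return data

# Replaying an access trace through the cache yields the hit rate, one of
# the quantities that cost models for consolidated storage depend on.
cache = StreamingCache(2_000_000, fetch_remote=lambda name: b"x" * 1_000_000)
for name in ["a", "b", "a", "c", "a"]:
    cache.read(name)
print(cache.hits, cache.misses)   # 2 hits, 3 misses
```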


2013 ◽  
Vol 5 (1) ◽  
pp. 53-69
Author(s):  
Jacques Jorda ◽  
Aurélien Ortiz ◽  
Abdelaziz M’zoughi ◽  
Salam Traboulsi

Grid computing is commonly used for large-scale applications requiring huge computation capabilities. In such distributed architectures, data storage on the distributed storage resources must be handled by a dedicated storage system to ensure the required quality of service. In order to simplify data placement on nodes and to increase the performance of applications, a storage virtualization layer can be used. This layer can be a single parallel filesystem (like GPFS) or a more complex middleware. The latter is preferred, as it allows data placement on the nodes to be tuned to increase both the reliability and the performance of data access. In such a middleware, a dedicated monitoring system must be used to ensure optimal performance. In this paper, the authors briefly introduce the Visage middleware, a middleware for storage virtualization. They present the most broadly used grid monitoring systems and explain why these are not adequate for monitoring virtualized storage. The authors then present the architecture of their monitoring system dedicated to storage virtualization, introduce the workload prediction model used to select the best node for data placement, and show its accuracy in a simple experiment.
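The abstract does not detail the workload prediction model, so the following is only a plausible sketch under the assumption of an exponential-moving-average load predictor: each node's recent load observations are smoothed, and new data is placed on the node with the lowest predicted load. The class and parameter names are hypothetical, not Visage's API.

```python
class NodeMonitor:
    """Hypothetical sketch: predict each storage node's near-future load
    with an exponential moving average (EMA) and place new data on the
    node with the lowest predicted load."""

    def __init__(self, node_ids, alpha=0.3):
        self.alpha = alpha                         # weight of the newest sample
        self.predicted = {n: 0.0 for n in node_ids}

    def observe(self, node_id, measured_load):
        # EMA update: blend the latest measurement with the history
        p = self.predicted[node_id]
        self.predicted[node_id] = self.alpha * measured_load + (1 - self.alpha) * p

    def best_node(self):
        # Choose the node with the lowest predicted load for the next placement
        return min(self.predicted, key=self.predicted.get)

monitor = NodeMonitor(["node-a", "node-b", "node-c"])
monitor.observe("node-a", 0.8)
monitor.observe("node-b", 0.2)
monitor.observe("node-c", 0.5)
print(monitor.best_node())   # -> "node-b"
```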


Cloud computing is an efficient technology that provides large-scale data file storage with security. However, the content owner cannot control data access by unauthorized clients, nor control data storage and usage. Some previous approaches combine data access control with data deduplication for cloud storage systems. Encrypted data in cloud storage is not handled effectively by current industrial deduplication solutions: the deduplication is unguarded against brute-force attacks and fails to support data access control. Data deduplication is a widely used data-confining technique that eliminates multiple copies of redundant data, reducing the space needed to store the data and thus saving bandwidth. To overcome the above problems, an efficient content discovery and preserving deduplication (ECDPD) algorithm was proposed that detects the client file range and block range of deduplication when storing data files in the cloud storage system. Data access control is actively supported by ECDPD. Experimental evaluations show that the proposed ECDPD method reduces Data Uploading Time (DUT) by 3.802 milliseconds and Data Downloading Time (DDT) by 3.318 milliseconds compared with existing approaches.
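ECDPD's file-range and block-range detection is not specified in the abstract; the sketch below shows only the generic block-level deduplication idea it builds on, with fixed-size blocks addressed by their content hash so that identical blocks are stored once. The names and block size are illustrative assumptions.

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks; ECDPD's range detection is more elaborate

class DedupStore:
    """Generic block-level deduplication sketch: identical blocks are
    stored once and referenced by their SHA-256 content hash."""

    def __init__(self):
        self.blocks = {}   # hex digest -> block bytes

    def put_file(self, data: bytes):
        recipe = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)   # store only unseen blocks
            recipe.append(digest)
        return recipe   # ordered list of hashes reconstructing the file

    def get_file(self, recipe):
        return b"".join(self.blocks[d] for d in recipe)

store = DedupStore()
recipe = store.put_file(b"A" * 8192 + b"B" * 4096)   # two "A" blocks dedup to one
assert store.get_file(recipe) == b"A" * 8192 + b"B" * 4096
assert len(store.blocks) == 2                        # 3 logical blocks, 2 stored
```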


Author(s):  
Michael Yoseph Ricky

To support the decision-making process effectively and efficiently, many companies invest in information technology for data management and storage. The emphasis is on storing large volumes of daily transaction data in a medium from which the data can be easily processed. The Online Mobile Repository System is a practical temporary storage system that can be accessed online, simplifying data access and organization anywhere and anytime. The design of this system draws on literature review, interviews, field studies, and combined design studies. The research results in a data storage system that is organized, secure, and structured to support the meeting process.


1988 ◽  
Vol 129 ◽  
pp. 357-358
Author(s):  
W. E. Himwich

The VLBI group in NASA's Crustal Dynamics Project (CDP) maintains an integrated system for analyzing geodetic VLBI data. This system includes: CALC, calibration programs, SOLVE, GLOBL, and the Data Base System. CALC is the program which contains the models used to calculate the theoretical delay. SOLVE is used to analyze single experiments. GLOBL is used to analyze large groups of experiments. The Data Base System is a self-documenting data storage system used to pass data between programs and archive the data. Kalman filtering is being investigated for operational use in geodetic data analysis.
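To make the Kalman-filtering remark concrete, here is a minimal one-dimensional measurement update of the kind such an operational analysis might apply, e.g. to refine a baseline-length estimate as new VLBI sessions arrive. This is a textbook sketch with made-up numbers, not the CDP software.

```python
def kalman_update(x_prior, p_prior, z, r):
    """One-dimensional Kalman measurement update (textbook sketch).

    x_prior, p_prior: predicted state estimate and its variance
    z, r:             new measurement and its variance
    """
    k = p_prior / (p_prior + r)           # Kalman gain: trust in the new data
    x_post = x_prior + k * (z - x_prior)  # corrected estimate
    p_post = (1.0 - k) * p_prior          # reduced uncertainty
    return x_post, p_post

# Hypothetical numbers: refine a baseline-length estimate (in mm) with a
# new, more precise measurement.
x, p = kalman_update(x_prior=5998.40, p_prior=0.04, z=5998.52, r=0.01)
print(round(x, 3), round(p, 4))   # 5998.496 0.008
```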


Author(s):  
Sunil S ◽  
A Ananda Shankar

A cloud storage system provides convenient file storage and sharing services for distributed clients. The proposed scheme preserves the privacy of data holders by managing encrypted data storage with deduplication. It can flexibly support data sharing with deduplication even when the data holder is offline, without intruding on the privacy of data holders. It is an effective approach to verify data ownership and check duplicate storage with secure challenges and big-data support. We integrate cloud data deduplication with data access control in a simple way, thus reconciling deduplication and encryption. We prove the security and assess the performance through analysis and simulation; the results show its efficiency, effectiveness, and applicability. In the proposed system, uploaded data is stored in the cloud organized by date, so that it is available to the data holders who need it, when they need it. The web log records whether a search keyword is repeated: records with repeated search data are retained in primary storage in the cloud, while all other records are stored on a temporary storage server. This step reduces the size of the web log, avoiding memory pressure and speeding up the analysis.
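The log-splitting step at the end of the abstract can be sketched directly: count keyword occurrences in the web log, keep records with repeated keywords in primary storage, and push the rest to the temporary storage server. The record format (keyword, payload) is an assumption made for illustration.

```python
from collections import Counter

def partition_log(records):
    """Sketch of the log-splitting step described above: keep records whose
    keyword occurs more than once in primary storage; move the rest to a
    temporary storage server. Records are assumed to be (keyword, payload)."""
    counts = Counter(keyword for keyword, _ in records)
    primary = [r for r in records if counts[r[0]] > 1]    # repeated searches
    temporary = [r for r in records if counts[r[0]] == 1] # one-off searches
    return primary, temporary

primary, temporary = partition_log([
    ("backup", "row1"), ("restore", "row2"), ("backup", "row3"),
])
# primary keeps both "backup" rows; the "restore" row goes to temporary storage
```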


Author(s):  
V. O. Chesnokov

Online social networks are one of the main platforms for discussion of arbitrary subjects and one of the main sources of data for analysing public opinion. Crawling and analysing data from online social networks is done with data monitoring systems, which include a data collecting system. A typical system for collecting data from the Internet contains a crawler, parsers, a queue of collection tasks, a task scheduling subsystem, and a module for writing structured data to a storage system. Crawling online social networks has a number of specific features. The paper considers methods of access to data from online social networks and the task planning subsystem. It formulates and substantiates the requirements for a data collecting system that crawls online social networks, namely scalability, extensibility, and the availability of a data storage subsystem and a queue of collection tasks. It describes the main methods of accessing data in online social networks: API-based access, processing of HTML pages, and specialised interfaces for bots. It also describes the main restrictions an online social network imposes: the need to register the application, a limited number of requests, and the need to obtain a user's permission to collect his or her data. Based on this analysis, anonymous download and processing of HTML pages was chosen as the data access method. The paper formulates the requirements for the task subsystem, namely the available types, hierarchy, and context of the tasks to be performed; describes the general architecture of the developed software system for crawling and analysing data from online social networks; and justifies its compliance with the requirements raised earlier. The problem of crawling and analysing users' ego-network graphs (subgraphs of a social graph) is considered; their collection features are described, and implementation options are proposed depending on the amount of data collected. The results obtained can be used to build monitoring systems for online social networks and to collect test data for the experimental evaluation of social graph analysis algorithms. Further development may involve a detailed consideration of collecting other types of data from online social networks.
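A minimal sketch of the collecting-system skeleton named above (a queue of collection tasks, a worker that fetches and parses pages, and a writer for structured results) might look as follows; the fetch and parse bodies are stand-ins, not the system described in the paper.

```python
import queue
import threading

task_queue = queue.Queue()   # queue of collection tasks: (task_type, url)
results = []                 # stand-in for the structured-data storage module

def fetch(url):
    # Stand-in for the anonymous HTML download chosen in the paper
    return f"<html>fetched {url}</html>"

def parse(html):
    # Stand-in parser; the real one extracts profiles, posts, graph edges
    return {"length": len(html)}

def worker():
    while True:
        task_type, url = task_queue.get()
        results.append((task_type, parse(fetch(url))))
        task_queue.task_done()   # a scheduler could enqueue follow-up tasks here

threading.Thread(target=worker, daemon=True).start()
task_queue.put(("ego_network", "https://social.example/user/42"))
task_queue.join()
print(results)
```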


2021 ◽  
Vol 13 (9) ◽  
pp. 1815
Author(s):  
Xiaohua Zhou ◽  
Xuezhi Wang ◽  
Yuanchun Zhou ◽  
Qinghui Lin ◽  
Jianghua Zhao ◽  
...  

With the remarkable development and progress of earth-observation techniques, remote sensing data keep growing rapidly and their volume has reached the exabyte scale. However, managing and processing such huge amounts of remote sensing data with complex and diverse structures remains a major challenge. This paper designs and realizes a distributed storage system for large-scale remote sensing data storage, access, and retrieval, called RSIMS (remote sensing images management system), which is composed of three sub-modules: RSIAPI, RSIMeta, and RSIData. Structured text metadata of different remote sensing images are stored in RSIMeta based on a set of uniform models and then indexed by distributed multi-level Hilbert grids for high spatiotemporal retrieval performance. Unstructured binary image files are stored in RSIData, which provides large, scalable storage capacity and efficient GDAL (Geospatial Data Abstraction Library) compatible I/O interfaces; popular GIS software and tools (e.g., QGIS, ArcGIS, rasterio) can access data stored in RSIData directly. RSIAPI provides users with a set of uniform interfaces for data access and retrieval, hiding the complex inner structures of RSIMS. The test results show that RSIMS can store and manage large numbers of remote sensing images from various sources with high and stable performance, and is easy to deploy and use.
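The Hilbert grid index can be illustrated with the classic coordinates-to-curve-distance conversion: map longitude/latitude to a cell on a 2^level x 2^level grid and compute its position along the Hilbert curve, which keeps spatially close images close in the index. This is a generic single-level sketch, not RSIMeta's actual distributed multi-level implementation.

```python
def xy2d(n, x, y):
    """Distance of cell (x, y) along the Hilbert curve on an n x n grid
    (n a power of two); classic iterative algorithm."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                  # rotate/flip quadrant for the next level
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

def hilbert_key(lon, lat, level=16):
    """Map a longitude/latitude pair to a Hilbert index on a 2^level grid."""
    n = 1 << level
    x = int((lon + 180.0) / 360.0 * (n - 1))
    y = int((lat + 90.0) / 180.0 * (n - 1))
    return xy2d(n, x, y)

# Nearby points receive nearby keys, so a range scan over the index
# touches spatially coherent metadata.
print(hilbert_key(116.39, 39.91), hilbert_key(116.40, 39.92))
```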


2021 ◽  
Vol 251 ◽  
pp. 02035
Author(s):  
Adrian Eduard Negru ◽  
Latchezar Betev ◽  
Mihai Carabaș ◽  
Costin Grigoraș ◽  
Nicolae Țăpuş ◽  
...  

CERN uses the world’s largest scientific computing grid, WLCG, for distributed data storage and processing. Monitoring of the CPU and storage resources is an important and essential element to detect operational issues in its systems, for example in the storage elements, and to ensure their proper and efficient function. The processing of experiment data depends strongly on the data access quality as well as its integrity, and both of these key parameters must be assured for the data lifetime. Given the substantial amount of data, O(200 PB), already collected by ALICE and kept at various storage elements around the globe, scanning every single data chunk would be a very expensive process, both in terms of computing resource usage and in terms of execution time. In this paper, we describe a distributed file crawler that addresses these natural limits by periodically extracting and analyzing statistically significant samples of files from storage elements; the crawler evaluates the results and is integrated with the existing monitoring solution, MonALISA.
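A sketch of the sampling idea: instead of scanning every chunk, draw a statistically significant random sample per storage element and extrapolate the corruption rate. The sample-size formula below is standard (Cochran's formula with finite-population correction); the catalogue and integrity-check interfaces are assumptions, not the crawler's real API.

```python
import math
import random

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's formula with finite-population correction: number of files
    to sample to estimate an error rate within `margin` at ~95% confidence."""
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

def crawl_sample(catalogue, check_integrity):
    """Verify a random sample instead of every file on a storage element."""
    picked = random.sample(catalogue, sample_size(len(catalogue)))
    bad = [f for f in picked if not check_integrity(f)]
    return len(bad) / len(picked)   # estimated corruption fraction

print(sample_size(1_000_000))   # ~385 files suffice at the 95% / 5% setting
```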


2020 ◽  
Vol 245 ◽  
pp. 04011
Author(s):  
Ofer Rind ◽  
Hironori Ito ◽  
Guangwei Che ◽  
Tim Chou ◽  
Robert Hancock ◽  
...  

Large scientific data centers have recently begun providing a number of different types of data storage in order to satisfy the various needs of their users. Users with interactive accounts, for example, might want a POSIX interface for easy access to the data from their interactive machines. Grid computing sites, on the other hand, likely need to provide X509-based storage protocols, such as SRM and GridFTP, since the data management system is built upon them. Meanwhile, an experiment producing large amounts of data typically demands a service that provides archival storage for the safekeeping of its unique data. To access these various types of data, users must use specific sets of commands tailored to the respective storage, making access to their data complex and difficult. BNLBox is an attempt to provide a unified and easy-to-use storage service for all BNL users to store their important documents, code, and data. It is a cloud storage system with an intuitive web interface for novice users. It provides an automated synchronization feature that enables users to upload data to their cloud storage without manual intervention, freeing them to focus on analysis rather than on data management software. It provides a POSIX interface for local interactive users, which simplifies data access from batch jobs as well. At the same time, it also provides users with a straightforward mechanism for archiving large data sets for later processing. The storage space can be used for both code and data within the compute job environment. This paper describes various aspects of the BNLBox storage service.
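The automated synchronization feature could, in its simplest form, be a polling loop over a watched folder that copies changed files through the POSIX mount. The sketch below assumes hypothetical paths and a plain-copy upload; the real BNLBox client is more sophisticated.

```python
import os
import shutil
import time

WATCH_DIR = "/home/user/bnlbox"   # hypothetical local sync folder
CLOUD_DIR = "/mnt/bnlbox"         # hypothetical POSIX mount of the cloud store

seen = {}  # path -> modification time already uploaded

def sync_once():
    for entry in os.scandir(WATCH_DIR):
        if entry.is_file():
            mtime = entry.stat().st_mtime
            if seen.get(entry.path) != mtime:
                # "upload" is a plain copy here, thanks to the POSIX interface
                shutil.copy2(entry.path, os.path.join(CLOUD_DIR, entry.name))
                seen[entry.path] = mtime

while True:
    sync_once()
    time.sleep(30)   # poll every 30 s; a real client would use inotify events
```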

