BIG DATA FOR RISK ANALYSIS: THE FUTURE OF SAFE RAILWAYS

Author(s):  
Miguel Figueres Esteban

New technology brings ever more data to support decision-making for intelligent transport systems. Big Data is no longer a futuristic challenge; it is happening right now: modern railway systems have countless sources of data providing a massive quantity of diverse information on every aspect of operations, such as train position and speed, brake applications, passenger numbers, status of the signaling system, or reported incidents. Traditional approaches to safety management on the railways have relied on static data sources to populate traditional safety tools such as bow-tie models and fault trees. The Big Data Risk Analysis (BDRA) program for railways at the University of Huddersfield is investigating how the many Big Data sources from the railway can be combined in a meaningful way to provide a better understanding of the GB railway systems and the environment within which they operate. Moving to BDRA is not simply a matter of scaling up existing analysis techniques: BDRA has to coordinate and combine a wide range of sources with different types of data and accuracy, and that is not straightforward. BDRA is structured around three components: data, ontology and visualisation. Each of these components is critical to supporting the overall framework. This paper describes how these three components are used to extract safety knowledge from two data sources by applying ontologies to text documents. This is part of the ongoing BDRA research that is looking at integrating many large and varied data sources to support railway safety and decision-makers.

DOI: http://dx.doi.org/10.4995/CIT2016.2016.1825
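As a minimal illustration of the ontology component described above, the sketch below tags free-text incident reports with ontology concepts via surface-term matching. The concept names, surface terms, and sample report are invented for illustration; the BDRA program itself uses far richer ontologies and text-mining methods.

```python
# Minimal sketch: tagging free-text incident reports with ontology concepts.
# Concept names and surface terms below are illustrative only.

ONTOLOGY = {
    "SignallingFailure": {"signal failure", "signal passed at danger", "spad"},
    "BrakeEvent":        {"brake application", "emergency brake"},
    "PlatformIncident":  {"platform overcrowding", "slip on platform"},
}

def tag_report(text: str) -> set:
    """Return the ontology concepts whose surface terms appear in the report."""
    lowered = text.lower()
    return {
        concept
        for concept, terms in ONTOLOGY.items()
        if any(term in lowered for term in terms)
    }

report = ("Driver reported an emergency brake application "
          "after a signal failure at the junction.")
print(tag_report(report))  # {'BrakeEvent', 'SignallingFailure'}
```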


Author(s):  
Imadeddine Mountasser ◽  
Brahim Ouhbi ◽  
Bouchra Frikh ◽  
Ferdaous Hdioud

Nowadays, people and things are becoming permanently interconnected. This interaction has overloaded the world with an incredible digital data deluge, termed big data, generated from a wide range of data sources. Indeed, big data has invaded the domain of tourism as a source of innovation that serves to better understand tourists' behavior and enhance tourism destination management and marketing. Thus, tourism stakeholders have increasingly been leveraging tourism-related big data sources to gather abundant information concerning all axes of the tourism industry. However, big data has several aspects of complexity and brings commensurate challenges that go along with its exploitation. It has specifically changed the way data is acquired and managed, which may influence the nature and the quality of the analyses conducted and the decisions made. This article therefore investigates the big data acquisition process, thoroughly identifies its challenges and requirements, and surveys current state-of-the-art protocols and frameworks.
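Since the article centres on the acquisition process, the following sketch illustrates one of its basic requirements: wrapping records from heterogeneous tourism data sources in a common envelope and screening them for completeness at ingestion time. Source names, fields, and the `Record` envelope are hypothetical; production pipelines would typically use streaming frameworks rather than in-memory lists.

```python
# Hypothetical acquisition step: normalise heterogeneous sources into a
# common envelope and drop records missing required fields.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Record:
    source: str        # origin: review site, booking log, mobile sensor, ...
    payload: dict      # raw attributes; schema varies per source
    acquired_at: datetime

def acquire(source: str, raw_items: list, required: set) -> list:
    """Keep only items that carry every required field, and timestamp them."""
    now = datetime.now(timezone.utc)
    kept = [item for item in raw_items if required <= item.keys()]
    print(f"{source}: kept {len(kept)}/{len(raw_items)} records")
    return [Record(source, item, now) for item in kept]

reviews = acquire("review_site",
                  [{"text": "great view", "rating": 5}, {"rating": 3}],
                  required={"text", "rating"})
```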


Web Services ◽  
2019 ◽  
pp. 639-656
Author(s):  
Anh D. Ta ◽  
Marcus Tanque ◽  
Montressa Washington

Given the emergence of big data technology and its rising popularity, it is important to ensure that the use of this avant-garde technology directly addresses the enterprise goals required to maximize return on investment (ROI). This chapter presents a specification framework for the process of transforming enterprise data into wisdom or actionable information through the use of big data technology. The framework is based on proven methodologies and consists of three components: Specify, Design, and Refine. It provides a systematic, top-down process for extrapolating big data requirements from high-level technical and enterprise goals, as well as a process for managing the quality of, and relationship between, raw data sources and big data products.
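To make the three components concrete, here is a minimal sketch of the traceability the framework implies: each big data requirement is specified against an enterprise goal, designed against raw sources, and refined through a quality check. All names and values are hypothetical examples, not taken from the chapter.

```python
# Illustrative Specify/Design/Refine traceability record (hypothetical names).
from dataclasses import dataclass
from typing import Callable

@dataclass
class DataRequirement:
    goal: str                   # Specify: enterprise goal the data serves
    description: str            # Specify: what must be produced
    raw_sources: list           # Design: where the data comes from
    quality_check: Callable     # Refine: is a produced record acceptable?

req = DataRequirement(
    goal="Reduce customer churn by 5%",
    description="Monthly churn-risk score per customer",
    raw_sources=["crm.accounts", "billing.invoices", "support.tickets"],
    quality_check=lambda rec: 0.0 <= rec.get("risk", -1.0) <= 1.0,
)
print(req.quality_check({"risk": 0.42}))   # True: passes the Refine gate
```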


Author(s):  
Vladislav Andreyevich Shcherbakov ◽  
Anna Nikolaevna Agafonova

The essence of the concept of BIG DATA is revealed, and possible data sources and functional features of the new technology are investigated. A comparative analysis of the traditional analytical approach and the approach based on the application of BIG DATA is presented. Prospects for the use of the technology in modern marketing are identified.


Author(s):  
E.D. Wolf

Most microelectronics devices and circuits operate faster, consume less power, execute more functions, and cost less per circuit function when the feature sizes internal to the devices and circuits are made smaller. This is part of the stimulus for the Very High Speed Integrated Circuits (VHSIC) program. There is also a need for smaller, more sensitive sensors in a wide range of disciplines, including electrochemistry, neurophysiology, and ultra-high-pressure solid-state research. There is often fundamental new science (and sometimes new technology) to be revealed (and used) when a basic parameter such as size is extended to new dimensions, as is evident at the two extremes of smallness and largeness: high-energy particle physics and cosmology, respectively. However, there is also a very important intermediate domain of size, spanning from the diameter of a small cluster of atoms up to near one micrometer, which may have just as profound an effect on society as "big" physics.


2019 ◽  
pp. 5-22
Author(s):  
Szymon Buczyński

Recent technological revolutions in data and communication systems enable us to generate and share data much faster than ever before. Sophisticated data tools aim to improve knowledge and boost confidence. Given that technological tools will only get better and more user-friendly over the years, big data can be considered an important tool for the arts and culture sector. Statistical analysis, econometric methods, and data mining techniques could pave the way towards a better understanding of the mechanisms at work in the art market. Moreover, crime reduction and prevention challenges in today's world are becoming increasingly complex and are in need of new techniques that can handle the vast amount of information being generated. This article examines a wide range of new technological innovations (IT) that have applications in the areas of culture preservation and heritage protection. The author describes recent technological innovations, summarizes the available research on the extent of their adoption through selected examples, and then reviews the available research on each form of new technology. Furthermore, the aim of this paper is to explore and discuss how big data analytics affects innovation and value creation in cultural organizations and shapes consumer behavior in the cultural heritage, arts, and cultural industries. The paper also discusses the likely impact of big data analytics on criminological research and theory: digital criminology draws on huge databases, in contrast to conventional data processing techniques, which are not only insufficient but also outdated. This paper aims to close a gap in the academic literature by showing the contribution of a big data approach to cultural economics, policy, and management from both a theoretical and a practice-based perspective. This work is also a starting point for further research.
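Among the econometric methods the abstract alludes to for studying art-market mechanisms, a standard example is hedonic regression, which models price as a function of a work's observable attributes. The toy sketch below fits such a model on synthetic data; a real study would use auction records and far richer attributes (artist, medium, provenance, sale date).

```python
# Toy hedonic price regression on synthetic art-market data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
area = rng.uniform(0.1, 4.0, n)        # canvas area in square metres
signed = rng.integers(0, 2, n)         # 1 if the work is signed
log_price = 8.0 + 0.6 * np.log(area) + 0.4 * signed + rng.normal(0, 0.3, n)

# Ordinary least squares on [intercept, log(area), signed].
X = np.column_stack([np.ones(n), np.log(area), signed])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
print(f"estimated premium for a signed work: {np.expm1(beta[2]):.1%}")
```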


2020 ◽  
Author(s):  
Bankole Olatosi ◽  
Jiajia Zhang ◽  
Sharon Weissman ◽  
Zhenlong Li ◽  
Jianjun Hu ◽  
...  

BACKGROUND: The Coronavirus Disease 2019 (COVID-19), caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), remains a serious global pandemic. Currently, all age groups are at risk of infection, but the elderly and persons with underlying health conditions are at higher risk of severe complications. In the United States (US), the pandemic curve is changing rapidly, with over 6,786,352 cases and 199,024 deaths reported. South Carolina (SC), as of 9/21/2020, reported 138,624 cases and 3,212 deaths across the state.

OBJECTIVE: The growing availability of COVID-19 data provides a basis for deploying Big Data science to leverage multitudinal and multimodal data sources for incremental learning. Doing this requires the acquisition and collation of multiple data sources at the individual and county level.

METHODS: The population for the comprehensive database comes from statewide COVID-19 testing surveillance data (March 2020 to present) for all SC COVID-19 patients (N≈140,000). This project will (1) connect multiple partner data sources for prediction and intelligence gathering, and (2) build a REDCap database that links de-identified multitudinal and multimodal data sources useful for machine learning and deep learning algorithms, to enable further studies. Additional data will include hospital-based COVID-19 patient registries, Health Sciences South Carolina (HSSC) data, data from the Office of Revenue and Fiscal Affairs (RFA), and Area Health Resource Files (AHRF).

RESULTS: The project was funded as of June 2020 by the National Institutes of Health.

CONCLUSIONS: The development of such a linked and integrated database will allow for the identification of important predictors of short- and long-term clinical outcomes for SC COVID-19 patients using data science.
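The linkage step described in METHODS can be illustrated with a small sketch: records from two sources are joined on a pseudonymous key derived from a salted hash of the direct identifier, which is then discarded. All field names and values below are hypothetical; in practice, key generation is delegated to an honest broker rather than written into analysis code.

```python
# Hypothetical de-identified linkage of two sources on a salted-hash key.
import hashlib
import pandas as pd

SALT = "project-secret-salt"   # in practice held only by the honest broker

def pseudo_id(identifier: str) -> str:
    return hashlib.sha256((SALT + identifier).encode()).hexdigest()[:16]

tests = pd.DataFrame({"mrn": ["111", "222"],
                      "test_date": ["2020-04-01", "2020-05-02"],
                      "result": ["positive", "positive"]})
hospital = pd.DataFrame({"mrn": ["111"], "admitted": [True], "los_days": [7]})

for df in (tests, hospital):
    df["pid"] = df.pop("mrn").map(pseudo_id)   # drop identifier, keep key

linked = tests.merge(hospital, on="pid", how="left")
print(linked[["pid", "result", "admitted", "los_days"]])
```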


Author(s):  
Marco Angrisani ◽  
Anya Samek ◽  
Arie Kapteyn

The number of data sources available for academic research on retirement economics and policy has increased rapidly in the past two decades. Data quality and comparability across studies have also improved considerably, with survey questionnaires progressively converging towards common ways of eliciting the same measurable concepts. Probability-based Internet panels have become a more accepted and recognized tool to obtain research data, allowing for fast, flexible, and cost-effective data collection compared to more traditional modes such as in-person and phone interviews. In an era of big data, academic research has also increasingly been able to access administrative records (e.g., Kostøl and Mogstad, 2014; Cesarini et al., 2016), private-sector financial records (e.g., Gelman et al., 2014), and administrative data married with surveys (Ameriks et al., 2020), to answer questions that could not be successfully tackled otherwise.
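As a minimal illustration of why probability-based panels support population inference, the sketch below applies survey weights (inverse selection probabilities) to correct a mean for unequal sampling rates. The numbers are invented for illustration.

```python
# Survey-weighted mean: weights are inverse selection probabilities (made up).
import numpy as np

savings = np.array([10_000, 12_000, 50_000, 55_000])   # reported savings
weights = np.array([2.0, 2.0, 0.5, 0.5])               # oversampled group gets < 1

print(f"unweighted mean: {savings.mean():,.0f}")                        # 31,750
print(f"weighted mean:   {np.average(savings, weights=weights):,.0f}")  # 19,300
```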

