Data-driven approaches for modeling train control models: Comparison and case studies

2020 ◽  
Vol 98 ◽  
pp. 349-363 ◽  
Author(s):  
Jiateng Yin ◽  
Shuai Su ◽  
Jing Xun ◽  
Tao Tang ◽  
Ronghui Liu


Author(s):  
Dave Schlesinger

A 1969 collision of two Penn Central trains resulted in four fatalities and forty-five injuries. The accident could have been prevented had some type of train control system been in place. Afterward, the National Transportation Safety Board (NTSB) asked the Federal Railroad Administration (FRA) to study the feasibility of requiring railroads to install automatic train control systems that would prevent human-factor-caused accidents. Over the following nearly four decades, a number of additional accidents occurred, culminating in the January 2005 Norfolk Southern accident at Graniteville and the September 2008 Metrolink accident at Chatsworth. A little more than one month after the Metrolink accident, Congress passed the Rail Safety Improvement Act, which requires Positive Train Control (PTC). To better explain the positive train control requirements, this paper traces each to a detailed case study. Four different accidents are studied, each exemplifying one of the four core positive train control requirements. Each case study includes a discussion of how positive train control, had it been present, would have prevented the accident. This gives positive train control implementers and other railroad professionals a better understanding of the factors that caused or contributed to the positive-train-control-preventable accidents studied.


2019 ◽  
Vol 160 ◽  
pp. 106204 ◽  
Author(s):  
Jiangyu Wang ◽  
Shuai Li ◽  
Huanxin Chen ◽  
Yue Yuan ◽  
Yao Huang

Author(s):  
M. Meijer ◽  
L. A. E. Vullings ◽  
J. D. Bulens ◽  
F. I. Rip ◽  
M. Boss ◽  
...  

Although perceived by many as important, spatial data quality has hardly ever taken centre stage unless something went wrong because of bad quality. However, we think this is about to change. We rely more and more on data-driven processes, and with the increased availability of data there is a choice of which data to use. How to make that choice? We think spatial data quality has potential as a selection criterion.

In this paper we focus on how a workflow tool can help both the consumer and the producer gain a better understanding of which product characteristics are important. For this purpose, we have developed a framework in which we define different roles (consumer, producer and intermediary) and differentiate between product specifications and quality specifications. A number of requirements are stated that can be translated into quality elements. We used case studies to validate our framework, which is designed following the fitness-for-use principle. Also part of this framework is software that in some cases can help ascertain the quality of datasets.


2021 ◽  
Author(s):  
Yutao Kuang ◽  
Jolene Reid

Organometallic intermediates participate in many multi-catalytic enantioselective transformations directed by a chiral catalyst, but the requirement of optimizing two catalyst components is a significant barrier to widely adopting this approach for chiral molecule synthesis. Algorithms can potentially accelerate the screening process by developing quantitative structure-function relationships from large experimental datasets. However, the chemical data available in this catalyst space is limited. We report a data-driven strategy that effectively translates selectivity relationships trained on enantioselectivity outcomes derived from one-catalyst reaction systems, where an abundance of data exists, to synergistic catalyst space. We describe three case studies involving different modes of catalysis (Brønsted acid, chiral anion, and secondary amine) that substantiate the prospect of this approach to predict and elucidate selectivity in reactions where more than one catalyst is involved. Ultimately, the success in applying our approach to diverse areas of asymmetric catalysis implies that this general workflow should find broad use in the study and development of new enantioselective, multi-catalytic processes.


2019 ◽  
Vol 13 (1) ◽  
pp. 163-214 ◽  
Author(s):  
Tino T. Herden

The purpose of this paper is to provide a theory-based explanation for the generation of competitive advantage from Analytics and to examine this explanation with evidence from confirmatory case studies. A theoretical argument for achieving sustainable competitive advantage from knowledge, unfolding in the knowledge-based view, forms the foundation for this explanation. Literature about the process of Analytics initiatives, surrounding factors and conditions, and benefits from Analytics is mapped onto the knowledge-based view to derive propositions. Eight confirmatory case studies of organizations mature in Analytics were collected, focused on Logistics and Supply Chain Management. A theoretical framework explaining the creation of competitive advantage from Analytics is derived and presented with an extensive description and rationale. This highlights various aspects outside of the analytical methods that contribute to impactful and successful Analytics initiatives. The relevance of a problem focus and of solving the problem iteratively, especially with incorporation of user feedback, is justified and compared to other approaches. Regarding expertise, the advantage of cross-functional teams over data-scientist-centric initiatives is discussed, as well as modes of and reasons for incorporating external expertise. Regarding the deployment of Analytics solutions, the importance of consumability, of users assuming responsibility for incorporating solutions into their processes, and of an innovation-promoting culture (as opposed to a data-driven culture) is described and rationalized. Further, this study presents a practical manifestation of the knowledge-based view.


Author(s):  
Heather M. Reynolds ◽  
A. Tina Wagle

With a focus on data-driven decision making, teacher education programs need to prepare preservice teachers to analyze data while modeling data-driven practices in their own programs. Research has demonstrated the effectiveness of using case studies to promote critical thinking, analysis and interpretation, and higher-order thinking. This study utilized the results from surveys of residents enrolled in a clinically rich residency program to develop and implement relevant case studies for use in program coursework. The utility of using case studies in graduate coursework was evaluated through a survey of current residents. The theoretical and practical value of creating case studies based on program-specific challenges, and examples of the case studies generated from this data, will be shared.


Corpora ◽  
2008 ◽  
Vol 3 (1) ◽  
pp. 59-81 ◽  
Author(s):  
Stefan Th. Gries ◽  
Martin Hilpert

In this paper, we introduce a data-driven bottom-up clustering method for the identification of stages in diachronic corpus data that differ from each other quantitatively. Much like regular approaches to hierarchical clustering, it is based on identifying and merging the most cohesive groups of data points, but, unlike regular approaches to clustering, it allows for the merging of temporally adjacent data, thus, in effect, preserving the chronological order. We exemplify the method with two case studies, one on verbal complementation of shall, the other on the development of the perfect in English.
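The merging scheme the abstract describes can be illustrated in a few lines: like ordinary agglomerative clustering it repeatedly merges the most cohesive pair of clusters, but only temporally adjacent clusters are eligible, so chronological order is preserved. The sketch below is illustrative, not the authors' implementation; the function name and the single-value-per-period representation are assumptions for the example.

```python
def neighbour_clustering(values):
    """Adjacency-constrained bottom-up clustering of a time-ordered series.

    Starts with one cluster per period and repeatedly merges the pair of
    *temporally adjacent* clusters whose means are closest, recording each
    merge step (index of the left cluster, index of the right cluster,
    distance at merge), much like a dendrogram.
    """
    clusters = [[v] for v in values]  # one cluster per period, in order
    merges = []
    while len(clusters) > 1:
        # distance between adjacent clusters = |difference of their means|
        dists = [abs(sum(a) / len(a) - sum(b) / len(b))
                 for a, b in zip(clusters, clusters[1:])]
        i = dists.index(min(dists))   # most cohesive adjacent pair
        merges.append((i, i + 1, dists[i]))
        clusters[i:i + 2] = [clusters[i] + clusters[i + 1]]  # merge in place
    return merges

# Example: a frequency series with an apparent break halfway through;
# the constraint keeps early and late periods from being mixed, and the
# final (largest-distance) merge marks the candidate stage boundary.
history = neighbour_clustering([2.1, 2.0, 2.2, 5.9, 6.1, 6.0])
```

Because merges are restricted to neighbours, a large jump in the recorded merge distances suggests a boundary between diachronic stages.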

