OPTIBOX - Software Tool for the OPTImal distribution of hot BOx aXle detectors

Author(s):  
Cecília Vale ◽  
Carlos Saborido Amate ◽  
Cristiana Bonifácio

Axle bearings are a safety-critical component because they can fail suddenly. Hot box detectors are wayside devices that aim to identify axle bearings with a high potential for failure. It is therefore important to place these sensors along the network so as to minimize the risk of axle bearing failures that could lead to train derailments. How many of these wayside devices to install, and where, depends on the requirements of each country and on the available investment capacity. However, there is no tool on the market that helps Infrastructure Managers prioritize locations for hot box detectors. In this context, the OPTIBOX tool presented in this article is a useful and easy-to-use tool to guide Infrastructure Managers in selecting the most appropriate locations for hot box detectors according to historical data of the line and its most relevant characteristics, such as speed, type of trains or volume of traffic.
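The abstract does not disclose how OPTIBOX actually weighs these factors, but a minimal sketch of the underlying prioritization idea, assuming a hypothetical weighted risk score over line speed, traffic volume and historical hot axle events, might look like this (all names, weights and normalization constants are illustrative, not the tool's method):

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A candidate track location for a hot box detector (hypothetical model)."""
    name: str
    line_speed_kmh: float       # maximum line speed at the location
    trains_per_day: float       # traffic volume
    past_hot_axle_events: int   # historical alarms/failures near this location

def risk_score(c: Candidate, w_speed=0.4, w_traffic=0.3, w_history=0.3) -> float:
    """Weighted risk score; higher means higher priority. Weights are illustrative."""
    return (w_speed * c.line_speed_kmh / 200.0
            + w_traffic * c.trains_per_day / 100.0
            + w_history * c.past_hot_axle_events / 10.0)

def prioritize(candidates, budget):
    """Return the top-`budget` locations by descending risk score."""
    return sorted(candidates, key=risk_score, reverse=True)[:budget]

if __name__ == "__main__":
    sites = [
        Candidate("km 12.4", 160, 80, 3),
        Candidate("km 48.9", 200, 45, 1),
        Candidate("km 73.1", 120, 110, 5),
    ]
    for site in prioritize(sites, budget=2):
        print(site.name, round(risk_score(site), 3))
```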

2014 ◽  
Vol 797 ◽  
pp. 117-122 ◽  
Author(s):  
Carolina Bermudo ◽  
F. Martín ◽  
Lorenzo Sevilla

Previous studies have established the best adaptation and solution for the implementation of the modular model, the current choice being based on the minimization of the dimensionless p/2k relation obtained for each model, analyzed under the same boundary conditions and loads. Among the different cases covered, this paper presents the study of the optimal choice of the geometric distribution of zones. The Upper Bound Theorem (UBT), through its Triangular Rigid Zones (TRZ) formulation under a modular distribution, is applied to indentation processes. To extend the application of the model, cases of different thicknesses are considered.
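As a rough illustration of the selection logic only: assuming hypothetical p/2k expressions for two candidate zone distributions (the actual UBT/TRZ formulas depend on the triangular rigid zone geometry and are not given in the abstract), the optimal distribution at each geometry ratio is simply the one minimizing p/2k:

```python
# Placeholder p/2k expressions for two candidate zone distributions; these are
# illustrative stand-ins, not the paper's Upper Bound Theorem formulas.
def p2k_distribution_a(b_over_h: float) -> float:
    """Hypothetical p/2k for candidate distribution A at width/thickness ratio b/h."""
    return 1.0 + 1.0 / (4.0 * b_over_h) + 0.25 * b_over_h

def p2k_distribution_b(b_over_h: float) -> float:
    """Hypothetical p/2k for candidate distribution B."""
    return 1.0 + 1.0 / (2.0 * b_over_h) + 0.15 * b_over_h

def best_distribution(b_over_h: float, candidates: dict):
    """Pick the zone distribution with the lowest p/2k at this geometry ratio."""
    name = min(candidates, key=lambda k: candidates[k](b_over_h))
    return name, candidates[name](b_over_h)

if __name__ == "__main__":
    candidates = {"A": p2k_distribution_a, "B": p2k_distribution_b}
    for ratio in (0.5, 1.0, 2.0, 4.0):  # punch-width to thickness ratios
        name, val = best_distribution(ratio, candidates)
        print(f"b/h = {ratio}: distribution {name}, p/2k = {val:.3f}")
```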


2021 ◽  
Author(s):  
Maximilian Peter Dammann ◽  
Wolfgang Steger ◽  
Ralph Stelzer

Abstract: Product visualization in AR/VR applications requires a largely manual process of data preparation. Previous publications focus on error-free triangulation or transformation of product structure data and display attributes for AR/VR applications. This paper focuses on the preparation of the required geometry data. In this context, a significant reduction in effort can be achieved through automation. The steps of geometry preparation are identified and examined with respect to their automation potential. In addition, possible couplings of sub-steps are discussed. Based on these explanations, a structure for the geometry preparation process is proposed. With this structured preparation process, it becomes possible to consider the available computing power of the target platform during the geometry preparation. The number of objects to be rendered, the tessellation quality, and the level of detail can be controlled by the automated choice of transformation parameters. Through this approach, tedious preparation tasks and iterative performance optimization can be avoided in the future, which also simplifies the integration of AR/VR applications into product development and use. We present a software tool in which partial steps of the automatic preparation are already implemented. After an analysis of the product structure of a CAD file, the transformation is executed for each component. Functions implemented so far allow, for example, the selection of assemblies and parts based on filter options, the transformation of geometries in batch mode, the removal of certain details, and the creation of UV maps. Flexibility, transformation quality, and time savings are described and discussed.
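A minimal sketch of such a filter-and-transform batch pipeline, assuming a hypothetical part model and a simple heuristic that derives a tessellation tolerance from a per-platform triangle budget (none of this is the authors' actual implementation), could look as follows:

```python
from dataclasses import dataclass

@dataclass
class Part:
    name: str
    bounding_diagonal_mm: float  # used to pick a tessellation tolerance

def select_parts(parts, name_filter: str):
    """Filter step: keep only parts whose name matches a simple substring filter."""
    return [p for p in parts if name_filter in p.name]

def tessellation_tolerance(part: Part, triangle_budget: int) -> float:
    """Heuristic: coarser chord tolerance when the per-part triangle budget is small."""
    return max(part.bounding_diagonal_mm / triangle_budget, 0.01)

def prepare(parts, name_filter: str, platform_triangle_budget: int):
    """Batch transformation: one (part, tolerance) job per selected component."""
    jobs = []
    for part in select_parts(parts, name_filter):
        tol = tessellation_tolerance(part, platform_triangle_budget)
        jobs.append((part.name, tol))  # a real tool would call its CAD kernel here
    return jobs

if __name__ == "__main__":
    model = [Part("housing_main", 450.0), Part("screw_m4", 12.0), Part("housing_lid", 300.0)]
    for name, tol in prepare(model, "housing", platform_triangle_budget=5000):
        print(f"{name}: tessellate with chord tolerance {tol:.3f} mm")
```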


2015 ◽  
Vol 56 (1) ◽  
pp. 59-83
Author(s):  
Dafydd Gibbon ◽  
Katarzyna Klessa ◽  
Jolanta Bachan

Abstract: The study of speech timing, i.e. the duration and speed or tempo of speech events, has increased in importance over the past twenty years, in particular in connection with increased demands for accuracy, intelligibility and naturalness in speech technology, with applications in language teaching and testing, and with the study of speech timing patterns in language typology. However, the methods used in such studies are very diverse, and so far there is no accessible overview of these methods. Since the field is too broad for us to provide an exhaustive account, we have made two choices: first, to provide a framework of paradigmatic (classificatory), syntagmatic (compositional) and functional (discourse-oriented) dimensions for duration analysis; and second, to provide worked examples of a selection of methods associated primarily with these three dimensions. Some of the methods covered are established state-of-the-art approaches (e.g. the paradigmatic Classification and Regression Trees, CART, analysis), while others are discussed in a critical light (e.g. so-called ‘rhythm metrics’). A set of syntagmatic approaches applies to the tokenisation and tree parsing of duration hierarchies, based on speech annotations, and a functional approach describes duration distributions with sociolinguistic variables. Several of the methods are supported by a new web-based software tool for analysing annotated speech data, the Time Group Analyser.
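As one concrete example of the ‘rhythm metrics’ the authors discuss critically, the following sketch computes the widely cited normalised Pairwise Variability Index (nPVI) over successive interval durations; the input durations are illustrative, not from the paper:

```python
def npvi(durations):
    """Normalised Pairwise Variability Index (nPVI) over successive interval
    durations (e.g. vocalic intervals, in seconds). Higher values indicate
    stronger alternation between long and short intervals."""
    if len(durations) < 2:
        raise ValueError("nPVI needs at least two intervals")
    pairs = zip(durations, durations[1:])
    terms = [abs(a - b) / ((a + b) / 2.0) for a, b in pairs]
    return 100.0 * sum(terms) / len(terms)

if __name__ == "__main__":
    # Illustrative vocalic interval durations from an annotation tier (seconds)
    vowels = [0.11, 0.23, 0.09, 0.18, 0.07, 0.21]
    print(f"nPVI = {npvi(vowels):.1f}")
```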


2012 ◽  
Vol 253-255 ◽  
pp. 2091-2096
Author(s):  
Yan Feng Tang ◽  
Hui Mei Li ◽  
Xiang Kai Liu ◽  
Shao Qing Liu

The Bayesian method is introduced and applied to vehicle fault data processing. Parameter estimation and the selection of the optimal distribution model based on the Bayesian method are studied, and an example is given. This provides a reference for applying the Bayesian method to large, complicated systems such as vehicle equipment.
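The abstract does not specify which distribution models were considered; a minimal sketch of Bayesian parameter estimation and model scoring, assuming an exponential time-to-failure model with a conjugate Gamma prior (an assumption for illustration, not the paper's setup), is:

```python
import math

def posterior_gamma(alpha0, beta0, failure_times):
    """Conjugate update: Gamma(alpha0, beta0) prior on the rate of an
    exponential time-to-failure model, updated with observed failure times."""
    n = len(failure_times)
    return alpha0 + n, beta0 + sum(failure_times)

def log_marginal_likelihood(alpha0, beta0, failure_times):
    """log p(data) under the Gamma-exponential model; usable for comparing
    candidate distribution models via Bayes factors."""
    alpha_n, beta_n = posterior_gamma(alpha0, beta0, failure_times)
    return (math.lgamma(alpha_n) - math.lgamma(alpha0)
            + alpha0 * math.log(beta0) - alpha_n * math.log(beta_n))

if __name__ == "__main__":
    times = [120.0, 85.0, 210.0, 160.0, 95.0]  # illustrative hours to failure
    a, b = posterior_gamma(2.0, 100.0, times)
    print(f"posterior rate mean = {a / b:.4f} failures/hour")
    print(f"log marginal likelihood = {log_marginal_likelihood(2.0, 100.0, times):.3f}")
```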


2020 ◽  
Vol 163 (3) ◽  
pp. 1267-1285 ◽  
Author(s):  
Jens Kiesel ◽  
Philipp Stanzel ◽  
Harald Kling ◽  
Nicola Fohrer ◽  
Sonja C. Jähnig ◽  
...  

Abstract: The assessment of climate change and its impact relies on the ensemble of models available and/or sub-selected. However, assessing the validity of simulated climate change impacts is not straightforward, because historical data is commonly used for bias-adjustment, to select ensemble members or to define a baseline against which impacts are compared, and, naturally, there are no observations against which to evaluate future projections. We hypothesize that historical streamflow observations contain valuable information for investigating practices for the selection of model ensembles. The Danube River at Vienna is used as a case study, with EURO-CORDEX climate simulations driving the COSERO hydrological model. For each selection method, we compare the observed to the simulated streamflow shift from the reference period (1960–1989) to the evaluation period (1990–2014). Comparison against no selection shows that an informed selection of ensemble members improves the quantification of climate change impacts. However, the selection method matters: selection based on hindcasted climate or streamflow alone is misleading, while methods that maintain the diversity and information content of the full ensemble are favorable. Prior to carrying out climate impact assessments, we propose splitting the long-term historical data and using it to test climate model performance, sub-selection methods, and their agreement in reproducing the indicator of interest, which further provides an expectable benchmark for near- and far-future impact assessments. This test is well suited to being applied in multi-basin experiments to obtain a better understanding of uncertainty propagation and more universal recommendations regarding uncertainty reduction in hydrological impact studies.
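A minimal sketch of the comparison step, assuming hypothetical member names and illustrative streamflow means (not the paper's data), ranks ensemble members by how closely their simulated shift between the two periods matches the observed shift:

```python
import statistics

def relative_shift(reference, evaluation):
    """Relative change in mean streamflow from a reference to an evaluation period."""
    ref_mean = statistics.mean(reference)
    return (statistics.mean(evaluation) - ref_mean) / ref_mean

def rank_members(observed_shift, member_series):
    """Rank ensemble members by how closely their simulated shift matches
    the observed shift; smaller error is better."""
    errors = {name: abs(relative_shift(ref, ev) - observed_shift)
              for name, (ref, ev) in member_series.items()}
    return sorted(errors.items(), key=lambda kv: kv[1])

if __name__ == "__main__":
    # Illustrative annual mean flows (m^3/s) for the two periods
    obs_shift = relative_shift([1900, 1950, 1880], [1820, 1790, 1840])
    members = {
        "gcm_rcm_01": ([1850, 1900, 1870], [1800, 1760, 1820]),
        "gcm_rcm_02": ([1950, 2000, 1980], [1990, 2010, 2005]),
        "gcm_rcm_03": ([1800, 1850, 1830], [1700, 1680, 1720]),
    }
    for name, err in rank_members(obs_shift, members):
        print(f"{name}: |shift error| = {err:.4f}")
```

Note that ranking by a single skill measure like this is exactly the kind of selection the authors caution against when used alone; their favored methods retain the diversity of the full ensemble.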


1987 ◽  
Vol 2 (1) ◽  
pp. 55-63 ◽  
Author(s):  
Brian P. Bloomfield

Abstract: This paper examines the claim that machine induction can alleviate the current knowledge engineering bottleneck in expert system construction. It presents a case study of the rule induction software tool known as Expert-Ease and proposes a set of criteria which might guide the selection of appropriate domains.
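Expert-Ease was a rule induction tool in the ID3 tradition of decision tree learning; a minimal sketch of that underlying technique, using scikit-learn on a toy, hypothetical diagnosis dataset rather than anything from the case study, is:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy training examples for a hypothetical diagnosis domain: each row encodes
# (symptom_a, symptom_b) as 0/1, with a class label to induce rules for.
X = [[1, 0], [1, 1], [0, 0], [0, 1], [1, 1], [0, 0]]
y = ["fault", "fault", "ok", "ok", "fault", "ok"]

# criterion="entropy" mirrors ID3's information-gain splitting
tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)
print(export_text(tree, feature_names=["symptom_a", "symptom_b"]))
```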

