Simulation of Longitudinal Train Dynamics: Case Studies Using the Train Energy and Dynamics Simulator (TEDS)

Author(s):  
Monique F. Stewart ◽  
S. K. (John) Punwani ◽  
David R. Andersen ◽  
Graydon F. Booth ◽  
Som P. Singh ◽  
...  

Longitudinal dynamics influence several measures of train performance, including schedule adherence, energy efficiency, stopping distances, and run-in/run-out forces. An effective set of tools for studying longitudinal dynamics is therefore essential to improving the safety and performance of train operations. The Train Energy and Dynamics Simulator (TEDS) is a state-of-the-art software program designed and developed by the Federal Railroad Administration (FRA) for studying and simulating train safety and performance, and it can be used to model train performance under a wide variety of equipment, track, and operating configurations [1]. This paper describes several case studies and real-world applications of TEDS, including the investigation of multiple train make-up and train handling related derailments, a study of train stopping distances, evaluations of the safety benefits of Electronically Controlled Pneumatic (ECP) brakes and Distributed Power operations, and a study of alternative train handling methodologies. These studies demonstrate how appropriate simulation tools can quantify and improve the understanding of train dynamics and the resulting safety benefits.
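As a rough illustration of what such a longitudinal simulation involves (TEDS itself is far more detailed), the following sketch integrates a lumped-mass train model: each vehicle is a point mass, couplers are linear spring-dampers, and run-in/run-out shows up as the peak coupler force. All masses, stiffnesses, and resistance coefficients are assumed values for illustration, not TEDS parameters.

```python
import numpy as np

# Minimal lumped-mass longitudinal train model (illustrative; not TEDS code).
# Each vehicle is a point mass; couplers are linear spring-dampers.
N = 10                      # vehicles: 1 locomotive + 9 cars
L0 = 20.0                   # coupler rest spacing between vehicle centers [m]
m = np.full(N, 90_000.0)    # mass per vehicle [kg] (assumed)
k = 5.0e6                   # coupler stiffness [N/m] (assumed)
c = 5.0e4                   # coupler damping [N*s/m] (assumed)

def davis_resistance(v):
    """Davis-type rolling resistance per vehicle [N] (coefficients assumed)."""
    return 600.0 + 30.0 * v + 1.2 * v**2

def step(x, v, f_traction, dt=0.01):
    """One semi-implicit Euler step; returns new state and coupler forces."""
    stretch = (x[:-1] - x[1:]) - L0                  # coupler extension [m]
    f_coupler = k * stretch + c * (v[:-1] - v[1:])   # >0 = draft, <0 = buff
    f = np.zeros(N)
    f[0] += f_traction                   # traction applied at the lead unit
    f[:-1] -= f_coupler                  # stretched coupler pulls leader back
    f[1:] += f_coupler                   # ... and pulls the trailer forward
    f -= np.sign(v) * davis_resistance(np.abs(v))
    v = v + dt * f / m
    x = x + dt * v                       # semi-implicit: uses updated velocity
    return x, v, f_coupler

x = -np.arange(N) * L0                   # head car at x = 0, train behind it
v = np.full(N, 10.0)                     # initial speed [m/s]
peak = 0.0
for _ in range(5000):                    # 50 s at constant traction
    x, v, fc = step(x, v, f_traction=200e3)
    peak = max(peak, np.abs(fc).max())
print(f"peak coupler (run-in/run-out) force: {peak/1e3:.1f} kN")
```

Production simulators replace the linear coupler with nonlinear draft-gear characteristics, add air-brake pneumatics, grade and curvature, and validated resistance models, but the state layout and time-stepping structure are essentially as above.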

Author(s):  
Leonardo B. Baruffaldi ◽  
Auteliano A. dos Santos

Engineers pay great attention to the comfort and performance issues associated with passenger train suspension systems: complex active shock-absorbing devices are developed, and modern simulation tools are employed to determine car body vibrations and ride behavior. Freight train suspensions, however, have not received the same attention, retaining essentially the same basic design for about 70 years. Recent increases in payloads and line speeds, together with growing pressure to reduce maintenance costs, are slowly changing this scenario, and numerical simulation methods are being used more and more. Most commercially available simulation software used by train manufacturers to address full-vehicle behavior treats the friction wedge (the main damping element in the three-piece truck suspension) as a weightless unidirectional force element, like springs and dampers, connecting the side frames to the bolster that carries the car body load. This paper uses an improved friction wedge model to emphasize the importance of considering the nonlinear characteristics of friction damping in the vertical and longitudinal dynamics of a freight wagon truck modeled with multibody dynamics.
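To see why the friction nonlinearity matters, the sketch below compares free vibration of a one-degree-of-freedom suspension under a linear viscous damper versus Coulomb-type friction damping, a crude stand-in for wedge friction (the paper's wedge model is far richer). All parameters are illustrative assumptions.

```python
import numpy as np

# Illustrative 1-DOF comparison: linear viscous damper vs. Coulomb friction
# damping (a crude stand-in for wedge friction). Parameters are assumed,
# not taken from the paper's wedge model.
m, k = 20_000.0, 2.0e6        # sprung mass [kg], suspension stiffness [N/m]
c_lin = 4.0e4                 # equivalent viscous damping [N*s/m]
f_coulomb = 15_000.0          # constant friction force magnitude [N]

def simulate(damping, x0=0.05, dt=1e-4, t_end=2.0):
    """Free vibration from an initial deflection; returns displacement history."""
    x, v, xs = x0, 0.0, []
    for _ in range(int(t_end / dt)):
        f = -k * x + damping(v)
        v += dt * f / m
        x += dt * v
        xs.append(x)
    return np.array(xs)

x_visc = simulate(lambda v: -c_lin * v)
# smooth (tanh) regularization of sign(v) avoids numerical chatter near v = 0
x_fric = simulate(lambda v: -f_coulomb * np.tanh(v / 1e-3))

# Coulomb damping decays linearly per cycle and can lock up off-center,
# unlike the exponential decay of the viscous model -- exactly the kind of
# behavior a weightless linear force element cannot reproduce.
print(f"residual offset, viscous: {x_visc[-1]*1e3:+.2f} mm, "
      f"friction: {x_fric[-1]*1e3:+.2f} mm")
```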


Author(s):  
Erik Paul ◽  
Holger Herzog ◽  
Sören Jansen ◽  
Christian Hobert ◽  
Eckhard Langer

Abstract This paper presents an effective device-level failure analysis (FA) method that uses a high-resolution low-kV Scanning Electron Microscope (SEM) in combination with an integrated state-of-the-art nanomanipulator to locate and characterize single defects in failing CMOS devices. The presented case studies utilize several FA techniques in combination with SEM-based nanoprobing for nanometer node technologies and demonstrate how these methods are used to investigate the root cause of IC device failures. The methodology represents a highly efficient physical failure analysis flow for 28 nm and larger technology nodes.


Author(s):  
Inzamam Mashood Nasir ◽  
Muhammad Rashid ◽  
Jamal Hussain Shah ◽  
Muhammad Sharif ◽  
Muhammad Yahiya Haider Awan ◽  
...  

Background: Breast cancer is considered the most perilous disease among females worldwide, and the number of new cases is increasing yearly. Many researchers have proposed efficient algorithms to diagnose breast cancer at early stages, improving efficiency and performance by exploiting features learned from gold-standard histopathological images. Objective: Most of these systems use either traditional handcrafted features or deep features that carry considerable noise and redundancy, which ultimately degrades system performance. Methods: A hybrid approach is proposed that fuses and optimizes handcrafted and deep features to classify breast cancer images. HOG and LBP features are serially fused with features from the pretrained models VGG19 and InceptionV3. The patient classification rate (PCR) and image classification rate (ICR) are used to evaluate the classification performance of the proposed method. Results: The method classifies breast cancer from histopathological images. Its performance is compared with state-of-the-art techniques, recording an overall patient-level accuracy of 97.2% and an image-level accuracy of 96.7%. Conclusion: The proposed hybrid method achieves the best performance compared to previous methods and can be used in intelligent healthcare systems for early breast cancer detection.
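A minimal sketch of the serial fusion step, assuming scikit-image for the handcrafted HOG/LBP descriptors and a Keras-pretrained VGG19 as the deep branch (the paper also uses InceptionV3; the exact descriptor settings and layer choices here are assumptions, not the paper's configuration):

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern
from skimage.color import rgb2gray
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input

# Serial fusion of handcrafted (HOG, LBP) and deep (VGG19) features for one
# image -- a sketch of the pipeline's shape; settings are assumptions.
cnn = VGG19(weights="imagenet", include_top=False, pooling="avg")  # 512-d output

def fused_features(img_rgb):               # img_rgb: HxWx3 uint8, e.g. 224x224
    gray = rgb2gray(img_rgb)
    f_hog = hog(gray, orientations=9, pixels_per_cell=(16, 16),
                cells_per_block=(2, 2))
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    f_lbp, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    x = preprocess_input(img_rgb.astype("float32")[None])
    f_deep = cnn.predict(x, verbose=0)[0]
    # serial (concatenation) fusion; a feature-selection/optimization step
    # would normally prune noisy and redundant dimensions before the classifier
    return np.concatenate([f_hog, f_lbp, f_deep])
```

The fused vector would then feed a conventional classifier (e.g., SVM), with the optimization stage the abstract mentions sitting between fusion and classification.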


2019 ◽  
Vol 13 (2) ◽  
pp. 14-31
Author(s):  
Mamdouh Alenezi ◽  
Muhammad Usama ◽  
Khaled Almustafa ◽  
Waheed Iqbal ◽  
Muhammad Ali Raza ◽  
...  

NoSQL-based databases are attractive for storing and managing big data, mainly due to their high scalability and data-modeling flexibility. However, security in NoSQL-based databases is weak, which raises concerns for users. In particular, security of data at rest is a major concern for users who deploy their NoSQL-based solutions in the cloud, because unauthorized access to the servers would expose the data easily. There have been some efforts to enable encryption of data at rest for NoSQL databases, but existing solutions support neither secure query processing nor secure data communication over the Internet, and their performance is poor. In this article, the authors address the security of NoSQL data at rest by introducing a system that can dynamically encrypt/decrypt data, supports secure query processing, and integrates seamlessly with any NoSQL-based database. The proposed solution is based on a combination of chaotic encryption and Order-Preserving Encryption (OPE). The experimental evaluation showed excellent results when the solution was integrated with MongoDB and compared with state-of-the-art existing work.
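The sketch below illustrates the order-preserving half of the idea with a deliberately naive monotone keyed encoder and standard pymongo calls: because ciphertext order matches plaintext order, the MongoDB server can answer range queries without ever seeing plaintexts. This toy encoder is not a secure OPE scheme and not the paper's chaotic/OPE construction; it only demonstrates the queryability property.

```python
import os
import hashlib
from pymongo import MongoClient

# Toy order-preserving encoder: a strictly increasing keyed mapping built by
# accumulating pseudorandom positive gaps. NOT secure; illustration only.
KEY = os.urandom(16)

def ope_encode(value):
    """Map a small non-negative integer to an order-preserving ciphertext."""
    total = 0
    for i in range(value + 1):
        h = hashlib.sha256(KEY + i.to_bytes(4, "big")).digest()
        total += 1 + (int.from_bytes(h[:4], "big") % 1000)  # positive gap
    return total

docs = [{"name": f"user{i}", "salary_ct": ope_encode(s)}
        for i, s in enumerate([300, 550, 120, 800, 410])]

col = MongoClient()["demo"]["staff"]
col.insert_many(docs)
# The server compares ciphertexts only, yet this finds salaries in (200, 600):
hits = col.find({"salary_ct": {"$gt": ope_encode(200), "$lt": ope_encode(600)}})
print([d["name"] for d in hits])
```

Fields that never appear in range predicates need no order leakage, which is presumably where the stronger chaotic cipher applies in the authors' design.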


Energies ◽  
2021 ◽  
Vol 14 (14) ◽  
pp. 4089
Author(s):  
Kaiqiang Zhang ◽  
Dongyang Ou ◽  
Congfeng Jiang ◽  
Yeliang Qiu ◽  
Longchuan Yan

In terms of power and energy consumption, DRAM plays as important a role in a modern server system as the processors do. Power-aware scheduling typically assumes energy proportionality among DRAM and the other components, but when memory-intensive applications are running, the energy consumption of the whole server is significantly affected by DRAM's lack of energy proportionality. Furthermore, modern servers usually adopt the NUMA architecture in place of the original SMP architecture to increase memory bandwidth, so studying the energy efficiency of these two memory architectures is of great significance. To explore the power-consumption characteristics of servers under memory-intensive workloads, this paper therefore evaluates the power consumption and performance of memory-intensive applications on different generations of real rack servers. Through this analysis, we find that: (1) workload intensity and the number of concurrently executing threads affect server power consumption, but a fully utilized memory system does not necessarily yield good energy-efficiency indicators; (2) even if the memory system is not fully utilized, the memory capacity per processor core has a significant impact on application performance and server power consumption; (3) when running memory-intensive applications, memory utilization is not always a good indicator of server power consumption; and (4) reasonable use of the NUMA architecture significantly improves memory energy efficiency. The experimental results show that reasonable use of the NUMA architecture can improve memory energy efficiency by 16% compared with the SMP architecture, while unreasonable use of the NUMA architecture reduces memory energy efficiency by 13%. These findings provide useful insights and guidance to help system designers and data-center operators with energy-efficiency-aware job scheduling and energy conservation.
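A simple way to observe the effect behind finding (4) on a two-socket Linux machine is to run a streaming microbenchmark under different numactl placements. The sketch below is only an illustration (array size and iteration count are arbitrary choices); effective bandwidth typically drops when memory is bound to a node remote from the executing cores.

```python
import time
import numpy as np

# Memory-bandwidth microbenchmark for comparing local vs. remote NUMA access.
# Run it pinned two ways and compare the reported bandwidth:
#   numactl --cpunodebind=0 --membind=0 python bench.py   # local access
#   numactl --cpunodebind=0 --membind=1 python bench.py   # remote access
N = 200_000_000                 # ~1.6 GB of float64 per array, defeats caches
a = np.ones(N)
b = np.ones(N)

t0 = time.perf_counter()
for _ in range(5):
    b += a                      # streaming read-modify-write over both arrays
dt = time.perf_counter() - t0

bytes_moved = 5 * 3 * 8 * N     # per pass: read a, read b, write b
print(f"effective bandwidth: {bytes_moved / dt / 1e9:.1f} GB/s")
```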


2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Geraldine Cáceres Sepúlveda ◽  
Silvia Ochoa ◽  
Jules Thibault

Abstract Due to the highly competitive market and increasingly stringent environmental regulations, it is paramount to operate chemical processes at their optimal point. In a typical process, there are usually many process variables (decision variables) that must be selected to achieve a set of optimal objectives under which the process is considered to operate optimally. Because some of the objectives are often contradictory, multi-objective optimization (MOO) can be used to find a suitable trade-off among all objectives that satisfies the decision maker. The first step is to circumscribe a well-defined Pareto domain, corresponding to the portion of the solution domain comprising a large number of non-dominated solutions. The second step is to rank all Pareto-optimal solutions based on the preferences of an expert on the process, using visualization tools and/or a ranking algorithm. The last step is to implement the best solution to operate the process optimally. In this paper, after reviewing the main methods for solving MOO problems and for selecting the best Pareto-optimal solution, four simple MOO problems are solved to clearly demonstrate the wealth of information about a given process that can be obtained from MOO rather than from a single aggregate objective. The four optimization case studies are the design of a PI controller, an SO2-to-SO3 reactor, a distillation column, and an acrolein reactor. The results of these case studies show the benefit of generating and using the Pareto domain to gain a deeper understanding of the underlying relationships between the various process variables and performance objectives.
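As a flavor of the first two steps (building a Pareto domain, then ranking it), the sketch below tunes a PI controller on a hypothetical first-order plant with two competing objectives, tracking error versus control effort, and extracts the non-dominated set. The plant, gain ranges, and objective definitions are illustrative assumptions, not the paper's case study.

```python
import numpy as np

# Sketch: build a Pareto domain for PI tuning on a toy first-order plant
# (dy/dt = (-y + u)/tau). Objectives: tracking error (IAE) vs. control
# effort. All settings here are illustrative assumptions.
rng = np.random.default_rng(0)

def objectives(kp, ki, tau=1.0, dt=0.01, t_end=10.0):
    y, integ, iae, effort = 0.0, 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y                      # unit-step setpoint
        integ += e * dt
        u = kp * e + ki * integ          # PI control law
        y += dt * (-y + u) / tau         # first-order plant
        iae += abs(e) * dt
        effort += u * u * dt
    return iae, effort

# Step 1: sample the decision space and keep the non-dominated (Pareto) set.
gains = rng.uniform([0.1, 0.0], [10.0, 5.0], size=(500, 2))
objs = np.array([objectives(kp, ki) for kp, ki in gains])
pareto = [i for i, fi in enumerate(objs)
          if not any(np.all(fj <= fi) and np.any(fj < fi) for fj in objs)]
print(f"{len(pareto)} Pareto-optimal solutions out of {len(objs)} sampled")

# Step 2 (ranking) would order these solutions by the decision maker's
# preferences, e.g. via a net-flow method or visual inspection of the front.
```

The trade-off is visible directly in the retained set: aggressive gains reduce the integrated error but inflate control effort, and no single point minimizes both, which is precisely the information a single aggregate objective would hide.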

