Pedestrian-Vehicle Accidents Reconstruction with PC-Crash®: Sensibility Analysis of Factors Variation

Author(s):  
Francisco Martínez Gala

This paper describes the main findings of a study performed by INSIA-UPM on improving the reconstruction of real-world vehicle-pedestrian accidents with PC-Crash® software, aimed at developing a software tool for estimating the variability of the collision speed caused by the lack of real values for some parameters required during the reconstruction task. The methodology is based on a sensitivity analysis of factor variation. A total of 9 factors were analyzed with the objective of identifying which ones were significant. Four of them (pedestrian height, collision angle, hood height, and pedestrian-road friction coefficient) were significant and were included in a full factorial experiment with the collision speed as an additional factor, in order to obtain a regression model with up to third-order interactions. Two factorial experiments with the same structure were performed to account for pedestrian gender differences. The tool was created as a collision speed predictor based on the regression models obtained, using the 4 significant factors and the projection distance measured or estimated at the accident site. The tool was applied to the analysis of reconstructed real-world accidents that occurred in the city of Madrid (Spain). The results were adequate in most cases, with less than 10% deviation between the predicted speed and the speed estimated in the reconstructions.

DOI: http://dx.doi.org/10.4995/CIT2016.2016.3467
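The predictor described above amounts to evaluating a fitted regression with interaction terms at one accident's factor values. The sketch below illustrates only that generic idea; the factor expansion and the coefficient vector are placeholders, not the model fitted by INSIA-UPM.

```python
# Hypothetical sketch of a regression-based collision-speed predictor:
# the four significant factors plus projection distance, expanded into
# interaction terms up to third order. Coefficients are placeholders.
import itertools
import numpy as np

def interaction_terms(x: np.ndarray, max_order: int = 3) -> np.ndarray:
    """Expand a factor vector into products of up to `max_order` factors."""
    terms = [1.0]  # intercept
    for order in range(1, max_order + 1):
        for combo in itertools.combinations(range(len(x)), order):
            terms.append(float(np.prod(x[list(combo)])))
    return np.array(terms)

def predict_collision_speed(pedestrian_height, collision_angle, hood_height,
                            friction_coeff, projection_distance, coeffs):
    """Evaluate the fitted regression at one accident's factor values."""
    x = np.array([pedestrian_height, collision_angle, hood_height,
                  friction_coeff, projection_distance])
    return float(interaction_terms(x) @ coeffs)

# With 5 inputs and third-order interactions there are 1+5+10+10 = 26 terms.
coeffs = np.zeros(26)  # placeholder: a real tool would use fitted values
```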

2021 ◽  
Vol 11 (6) ◽  
pp. 478
Author(s):  
Ching Chang ◽  
Chien-Hao Huang ◽  
Hsiao-Jung Tseng ◽  
Fang-Chen Yang ◽  
Rong-Nan Chien

Background: Hepatic encephalopathy (HE), a neuropsychiatric complication of decompensated cirrhosis, is associated with high mortality and a high risk of recurrence. Rifaximin added on to lactulose for 3 to 6 months is recommended for the prevention of recurrent episodes of HE after the second episode. However, whether the combination for more than 6 months is superior to lactulose alone in the maintenance of HE remission is less evident. Therefore, the aim of this study was to evaluate the one-year efficacy of rifaximin add-on to lactulose for the maintenance of HE remission in Taiwan. Methods: We conducted a real-world single-center retrospective cohort study to compare the long-term efficacy of rifaximin add-on to lactulose (group R + L) versus lactulose alone (group L, control group). Furthermore, treatment efficacy before and after rifaximin add-on to lactulose was also analyzed. The primary endpoint of our study was time to first HE recurrence (Conn score ≥ 2). All patients were followed up every three months until death, and censored at one year if still alive. Results and Conclusions: 12 patients were enrolled in group R + L. Another 31 patients were stratified into group L. Sex, comorbidity, ammonia level, and ascites grade were matched, while age, HE grade, and model for end-stage liver disease (MELD) score were adjusted for in the multivariable logistic regression model. Compared with group L, group R + L showed significant improvement in the maintenance of HE remission and fewer episodes and days of HE-related hospitalization. Serum ammonia levels were significantly lower at the 3rd and 6th months in group R + L. Concerning changes before and after rifaximin add-on in group R + L, Mini-Mental State Examination (MMSE) scores, episodes of hospitalization, and variceal bleeding also improved at 6 and 12 months; days of hospitalization and serum ammonia levels also improved at the 6th month. Apart from concerns over price, no patients discontinued rifaximin due to adverse events or complications. These results provide evidence for the one-year use of rifaximin add-on to lactulose to reduce HE recurrence and HE-related hospitalization in patients with decompensated cirrhosis.
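As a minimal sketch of the primary-endpoint analysis described above (time to first HE recurrence compared between the two groups), the snippet below uses the lifelines library; the CSV file and column names are hypothetical, and this is not the authors' code.

```python
# Compare time to first HE recurrence between groups with a Kaplan-Meier
# estimator and a log-rank test. Patients still in remission at one year
# are censored (recurred = 0). Data layout is assumed for illustration.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("he_cohort.csv")  # columns: group, days_to_recurrence, recurred
r_l = df[df["group"] == "R+L"]
l_only = df[df["group"] == "L"]

kmf = KaplanMeierFitter()
kmf.fit(r_l["days_to_recurrence"], r_l["recurred"], label="rifaximin + lactulose")
print(kmf.survival_function_.tail())

result = logrank_test(r_l["days_to_recurrence"], l_only["days_to_recurrence"],
                      event_observed_A=r_l["recurred"],
                      event_observed_B=l_only["recurred"])
print(f"log-rank p = {result.p_value:.3f}")
```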


2020 ◽  
Vol 36 (S1) ◽  
pp. 37-37
Author(s):  
Americo Cicchetti ◽  
Rossella Di Bidino ◽  
Entela Xoxi ◽  
Irene Luccarini ◽  
Alessia Brigido

Introduction: Different value frameworks (VFs) have been proposed in order to translate available evidence on the risk-benefit profiles of new treatments into pricing and reimbursement (P&R) decisions. However, limited evidence is available on the impact of their implementation. It is relevant to distinguish between VFs proposed by scientific societies and providers, which are usually applicable to all treatments, and VFs elaborated by regulatory agencies and health technology assessment (HTA) bodies, which focus on specific therapeutic areas. Such heterogeneity in VFs has significant implications in terms of the value dimensions considered and the criteria adopted to define or support a price decision. Methods: A literature search was conducted to identify VFs already proposed or adopted for onco-hematology treatments. Both the scientific and the grey literature were investigated. Then, an ad hoc data collection was conducted for multiple myeloma; breast, prostate, and urothelial cancer; and non-small cell lung cancer (NSCLC) therapies. Pharmaceutical products authorized by the European Medicines Agency from January 2014 to December 2019 were identified. The primary sources of data were European Public Assessment Reports and P&R decisions taken by the Italian Medicines Agency (AIFA) up to September 2019. Results: The analysis allowed us to define a taxonomy to distinguish categories of VF relevant to onco-hematological treatments. We identified the "real-world" VF that emerged from past P&R decisions taken at the Italian level. Data were collected both for clinical and economic outcomes/indicators, as well as for decisions taken on the innovativeness of therapies. Relevant differences emerge between the real-world value framework and the one that should be applied given the normative framework of the Italian health system. Conclusions: The value framework that emerged from the analysis addresses specific aspects of onco-hematological treatments identified during an ad hoc analysis of treatments authorized in the last five years. The perspective adopted to elaborate the VF was that of an HTA agency responsible for P&R decisions at the national level. Furthermore, by comparing a real-world value framework with one based on the general criteria defined by national legislation, our analysis allowed identification of the most critical points of the current national P&R process in terms of the sustainability of current and future therapies, such as advanced therapies and tumor-agnostic therapies.


1976 ◽  
Vol 49 (3) ◽  
pp. 862-908 ◽  
Author(s):  
K. A. Grosch ◽  
A. Schallamach

Evidence accumulates that tire forces on wet roads, particularly when the wheel is locked, are determined by the dry frictional properties of the rubber on the one hand and by hydrodynamic lubrication in the contact area on the other. The probable reason why they are so clearly separable is that water is a poor lubricant, tending to separate into globules and dry areas under relatively small pressures. Road surfaces and tire profiles are, therefore, designed to create easy drainage and high local contact pressures. The influence of road friction on vehicle control well below the critical conditions is becoming more clearly understood; but more investigations are required here, in particular under dynamic conditions.


2019 ◽  
Author(s):  
Daniel Tang

Agent-based models are a powerful tool for studying the behaviour of complex systems that can be described in terms of multiple, interacting "agents". However, because of their inherently discrete and often highly non-linear nature, it is very difficult to reason about the relationship between the state of the model, on the one hand, and our observations of the real world on the other. In this paper we consider agents that have a discrete set of states and that, at any instant, act with a probability that may depend on the environment or on the state of other agents. Given this, we show how the mathematical apparatus of quantum field theory can be used to reason probabilistically about the state and dynamics of the model, and describe an algorithm to update our belief in the state of the model in the light of new, real-world observations. Using a simple predator-prey model on a 2-dimensional spatial grid as an example, we demonstrate the assimilation of incomplete, noisy observations and show that this leads to an increase in the mutual information between the actual state of the observed system and the posterior distribution given the observations, when compared to a null model.
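The belief-update step the abstract describes is, in spirit, Bayesian data assimilation over an ensemble of candidate model states. The sketch below illustrates that generic idea with a bootstrap particle filter on a toy grid model; it is not the paper's quantum-field-theoretic algorithm, and the dynamics and observation model are stand-ins.

```python
# Bootstrap particle filter assimilating one noisy, partial observation
# into an ensemble of discrete agent-based model states (toy example).
import numpy as np

rng = np.random.default_rng(0)

def step_model(state):
    """Toy stochastic ABM step: each cell's occupancy flips with prob. 0.1."""
    flips = rng.random(state.shape) < 0.1
    return np.where(flips, 1 - state, state)

def likelihood(state, obs, err=0.2):
    """Probability of a noisy observation of a few cells, given a state."""
    match = state.ravel()[obs["cells"]] == obs["values"]
    return float(np.prod(np.where(match, 1 - err, err)))

# Ensemble of candidate model states (the prior belief).
particles = [rng.integers(0, 2, size=(8, 8)) for _ in range(500)]
obs = {"cells": np.array([3, 17, 40]), "values": np.array([1, 0, 1])}

# Predict, weight by the observation likelihood, resample (Bayes update).
particles = [step_model(p) for p in particles]
weights = np.array([likelihood(p, obs) for p in particles])
weights /= weights.sum()
idx = rng.choice(len(particles), size=len(particles), p=weights)
posterior = [particles[i] for i in idx]
```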


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Charles Marks ◽  
Arash Jahangiri ◽  
Sahar Ghanipoor Machiani

Every year, over 50 million people are injured and 1.35 million die in traffic accidents. Risky driving behaviors are responsible for over half of all fatal vehicle accidents. Identifying risky driving behaviors within real-world driving (RWD) datasets is a promising avenue for reducing the mortality burden associated with these unsafe behaviors, but numerous technical hurdles must be overcome to do so. Herein, we describe the implementation of a multistage process for classifying unlabeled RWD data as potentially risky or not. In the first stage, data are reformatted and reduced in preparation for classification. In the second stage, subsets of the reformatted data are labeled as potentially risky (or not) using the Iterative-DBSCAN method. In the third stage, the labeled subsets are used to fit random forest (RF) classification models; RF models were chosen after they were found to perform better than logistic regression and artificial neural network models. In the final stage, the RF models are used predictively to label the remaining RWD data as potentially risky (or not). The implementation of each stage is described and analyzed for the classification of RWD data from vehicles on public roads in Ann Arbor, Michigan. Overall, we identified 22.7 million observations of potentially risky driving out of 268.2 million observations. This study provides a novel approach for identifying potentially risky driving behaviors within RWD datasets. As such, it represents an important step in the implementation of protocols designed to address and prevent the harms associated with risky driving.
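A hedged sketch of the multistage idea follows (not the authors' Iterative-DBSCAN implementation): label a subset by density clustering, treating density outliers as potentially risky, then fit a random forest that propagates labels to the unlabeled remainder. The feature construction is a placeholder.

```python
# Stages 2-4 of a cluster-then-classify pipeline on synthetic stand-in data.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
features = rng.normal(size=(10_000, 4))    # e.g. speed, accel, jerk, heading rate
subset, remainder = features[:2_000], features[2_000:]

# Stage 2: density-based labeling on a subset. DBSCAN marks outliers
# with label -1; here those are treated as "potentially risky".
clusters = DBSCAN(eps=0.8, min_samples=10).fit_predict(subset)
risky = (clusters == -1).astype(int)

# Stage 3: fit a random forest on the labeled subset.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(subset, risky)

# Stage 4: propagate labels to the rest of the real-world driving data.
predicted = rf.predict(remainder)
print(f"flagged {predicted.sum()} of {len(remainder)} observations")
```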


Author(s):  
Nazmul Islam ◽  
Dulal C. Ghosh

The electronegativity and the hardness are two distinct fundamental descriptors of atoms and molecules, and this chapter describes how the authors have logically traced the commonality between the heuristic and basic philosophical structures of their origin, as well as their manifestation in the real world. The chapter also demonstrates that, with the evolution of time, the physical hardness and the chemical hardness have converged to one and the same general principle: hardness. The authors also try to expose the physical basis and operational significance of another very important descriptor, the electronegativity. The chapter further explores whether a hardness equalization principle can be conceived analogous to the well-established electronegativity equalization principle. The authors hypothesize that the electronegativity and the absolute hardness are two different appearances of one and the same fundamental property of atoms, and that the hardness equalization principle can be conceived just like the electronegativity equalization principle. To test this hypothesis, the authors have made several comparative studies by evaluating some well-known chemico-physical descriptors of the real world, such as heteronuclear bond distances, dipole charges, and dipole moments of molecules. The detailed comparative study suggests that the paradigm of the hardness equalization principle may be another law of nature, like the established electronegativity equalization principle.
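For reference, the equalization principles at issue are often written in a geometric-mean form; the statement below is the common textbook formulation (attributed to Sanderson for electronegativity), not necessarily the authors' exact one, with the hardness version obtained by analogy.

```latex
% Electronegativity equalization and the analogous hardness equalization
% for a molecule assembled from N atoms (schematic, geometric-mean form).
\begin{align}
  \chi_{\mathrm{mol}} &= \Bigl( \prod_{i=1}^{N} \chi_i \Bigr)^{1/N}
    && \text{(electronegativity equalization)} \\
  \eta_{\mathrm{mol}} &= \Bigl( \prod_{i=1}^{N} \eta_i \Bigr)^{1/N}
    && \text{(hardness equalization, by analogy)}
\end{align}
```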


Just Words ◽  
2019 ◽  
pp. 124-155
Author(s):  
Mary Kate McGowan

This chapter uses the framework of covert exercitives to explore potential harms of actions involving certain types of pornography. The sorts of pornography of interest are clarified, and the pornographic is shown to be context sensitive. The chapter focuses on the harms of subordination and silencing. Langton's account of the subordinating force of pornography is critically assessed. An alternative model, relying on the covert exercitive, is presented, and its advantages are illustrated using real-world examples from the law. Various kinds of silencing are identified, the speech act of refusal is clarified, and both causal and constitutive connections between actions involving pornography, on the one hand, and the harms of subordination and silencing, on the other, are discussed.


2020 ◽  
pp. 1-21
Author(s):  
Thierry Paul

By looking at three significant examples in analysis, geometry and dynamical systems, I propose the possibility of having two levels of realism in mathematics: the upper one, the one of entities; and a subordinated ground one, the one of objects. The upper level (entities) is more the one of ‘operations’, of mathematics in action, of the dynamics of mathematics, whereas the ground floor (objects) is more dedicated to culturally well-defined objects inherited from our perception of the physical or real world. I will show that the upper level is wider than the ground level, therefore foregrounding the possibility of having in mathematics entities without underlying objects. In the three examples treated in this article, this splitting of levels of reality is created directly by the willingness to preserve different symmetries, which take the form of identities or equivalences. Finally, it is proposed that mathematical Platonism is – in fine – a true branch of mathematics in order for mathematicians to avoid the temptation of falling into the Platonist alternative ‘everything is real’/‘nothing is real’.


2020 ◽  
Vol 2020 ◽  
pp. 1-16
Author(s):  
Xinman Zhang ◽  
Kunlei Jing ◽  
Guokun Song

The security problems of online transactions on smartphones reveal an extreme demand for reliable identity authentication systems. Compared with the face, fingerprint, and iris, the palmprint offers a lower risk of forgery, richer texture, and a more comfortable acquisition mode, yet it is rarely adopted for identity authentication. In this paper, we develop an effective, full-function palmprint authentication system for application on an Android smartphone, which bridges the algorithmic study and the application of palmprint authentication. In more detail, an overall system framework is designed with complete functions, including palmprint acquisition, key-point location, ROI segmentation, feature extraction, and feature coding. Basically, we develop a palmprint authentication system with user-friendly interfaces and good compatibility with the Android smartphone. In particular, on the one hand, to guarantee the effectiveness and efficiency of the system, we exploit the practical Log-Gabor filter for feature extraction and discuss the impact of filtering direction, downsampling ratio, and discriminative feature coding to propose an improved algorithm. On the other hand, after exploring the hardware components of the smartphone and the technical development of the Android system, we provide an open technology for extending biometric methods to real-world applications. On the public PolyU databases, simulation results suggest that the improved algorithm outperforms the original one, with a promising accuracy of 100% and a good speed of 0.041 seconds. In real-world authentication, the developed system achieves an accuracy of 98.40% and a speed of 0.051 seconds. All the results verify the accuracy and timeliness of the developed system.
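As an illustration of the filter-based matching pattern the abstract names (details assumed, not taken from the paper), the sketch below encodes a palmprint ROI with a Log-Gabor filter and compares two codes by Hamming distance.

```python
# Encode a palmprint ROI as a 1-bit Log-Gabor phase code and match by
# Hamming distance. Filter parameters are illustrative defaults.
import numpy as np

def log_gabor_code(roi: np.ndarray, f0: float = 0.1, sigma_ratio: float = 0.55):
    """Binarize the sign of the Log-Gabor-filtered ROI."""
    rows, cols = roi.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0  # avoid log(0) at the DC term
    lg = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    lg[0, 0] = 0.0      # a Log-Gabor filter has no DC component
    filtered = np.fft.ifft2(np.fft.fft2(roi) * lg)
    return np.real(filtered) > 0

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Fraction of disagreeing bits; small values mean a likely match."""
    return float(np.mean(code_a != code_b))

roi = np.random.default_rng(2).random((128, 128))  # stand-in for a segmented ROI
print(hamming_distance(log_gabor_code(roi), log_gabor_code(roi)))  # 0.0
```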


2014 ◽  
Vol 10 (3) ◽  
pp. 226-244 ◽  
Author(s):  
Johannes Lorey

Purpose – The purpose of this study is to introduce several metrics that enable universal and fine-grained characterization of arbitrary Linked Data repositories. Publicly accessible SPARQL endpoints contain vast amounts of knowledge from a large variety of domains. However, these endpoints are often not configured to process specific workloads as efficiently as possible. Assisting users in leveraging SPARQL endpoints requires insight into the functional and non-functional properties of these knowledge bases. Design/methodology/approach – This study presents comprehensive approaches for deriving these metrics. More specifically, the study utilizes concrete SPARQL queries to determine the corresponding values. Furthermore, it validates and discusses the introduced metrics through extensive evaluation on real-world SPARQL endpoints. Findings – The evaluation determined that endpoints exhibit different characteristics. While it comes as no surprise that latency and throughput are influenced by the network infrastructure, the costs of join operations depend on a number of factors that are not obvious to a data consumer. Moreover, in discussing mean, median, and upper-quartile values, the study found both endpoints that behave consistently and repositories that offer varying levels of performance. Originality/value – On the one hand, the contribution of the author's work lies in assisting data consumers in evaluating the quality of service of publicly available SPARQL endpoints. On the other hand, the performance metrics introduced in this study can also be considered additional input features for distributed query processing frameworks. Moreover, the author provides a universal means for discerning the characteristics of different SPARQL endpoints without the need for (synthetic or real-world) query workloads.
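A minimal sketch of the kind of probe the study describes follows, assuming the public DBpedia endpoint: issue a concrete SPARQL query and measure round-trip latency as one characterization metric. SPARQLWrapper is a standard client library; the query itself is illustrative, not one of the study's metric queries.

```python
# Probe a SPARQL endpoint: run one query, time the round trip.
import time
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
endpoint.setReturnFormat(JSON)
endpoint.setQuery("SELECT (COUNT(*) AS ?n) WHERE { ?s ?p ?o }")

start = time.perf_counter()
results = endpoint.query().convert()
latency = time.perf_counter() - start

n_triples = results["results"]["bindings"][0]["n"]["value"]
print(f"approx. triple count: {n_triples}, latency: {latency:.3f}s")
```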

