Occupant pre-crash kinematics in rotated seat arrangements

Author(s):  
Alexander Diederich ◽  
Christophe Bastien ◽  
Karthikeyan Ekambaram ◽  
Alexis Wilson

The introduction of automated L5 driving technologies will revolutionise the design of vehicle interiors and seating configurations, improving occupant comfort and experience. It is foreseen that pre-crash emergency braking and swerving manoeuvres will affect occupant posture, which could lead to an interaction with a deploying airbag. This research addresses the urgent safety need of defining the occupant’s kinematics envelope during that pre-crash phase, considering rotated seat arrangements and different seatbelt configurations. The research used two different sets of volunteer tests experiencing L5 vehicle manoeuvres: the first comprised 22 fit male volunteers of 50th-percentile stature wearing a lap belt (OM4IS), while the second comprised 87 volunteers with a BMI range of 19 to 67 kg/m² wearing a 3-point belt (UMTRI). Unique biomechanical kinematics corridors were then defined, as a function of belt configuration and vehicle manoeuvre, to calibrate an Active Human Model (AHM) using multi-objective optimisation coupled with a Correlation and Analysis (CORA) rating. The research improved the AHM’s omnidirectional kinematics response over the current state of the art in a generic lap-belted environment. The AHM was then tested in a rotated seating arrangement under extreme braking, highlighting that the maximum lateral and frontal motions are comparable regardless of the belt system, while the asymmetry of the 3-point belt increased the occupant’s motion towards the seatbelt buckle. It was also observed that frontal occupant kinematics decreased by 200 mm compared with the lap-belted configuration. This improved omnidirectional AHM is the first step towards designing safer future L5 vehicle interiors.

Algorithms ◽  
2019 ◽  
Vol 12 (5) ◽  
pp. 99 ◽  
Author(s):  
Kleopatra Pirpinia ◽  
Peter A. N. Bosman ◽  
Jan-Jakob Sonke ◽  
Marcel van Herk ◽  
Tanja Alderliesten

Current state-of-the-art medical deformable image registration (DIR) methods optimize a weighted sum of key objectives of interest. Having a pre-determined weight combination that leads to high-quality results for any instance of a specific DIR problem (i.e., a class solution) would facilitate clinical application of DIR. However, such a combination can vary widely for each instance and is currently often determined manually. A multi-objective optimization approach to DIR removes the need for manual tuning, providing a set of high-quality trade-off solutions. Here, we investigate machine learning of a multi-objective class solution, i.e., not a single weight combination but a set thereof, that, when used on any instance of a specific DIR problem, approximates such a set of trade-off solutions. To this end, we employed a multi-objective evolutionary algorithm to learn sets of weight combinations for three breast DIR problems of increasing difficulty: 10 prone-prone cases, 4 prone-supine cases with limited deformations, and 6 prone-supine cases with larger deformations and image artefacts. Clinically acceptable results were obtained for the first two problems. Therefore, for DIR problems with limited deformations, a multi-objective class solution can be machine learned and used to straightforwardly compute multiple high-quality DIR outcomes, potentially leading to more efficient use of DIR in clinical practice.
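As a rough illustration of the weighted-sum formulation and the class-solution idea, the sketch below re-runs a scalarized registration over a small, reusable set of weight combinations. The objective functions, the toy optimiser, and the weight set are hypothetical placeholders, not the registration software or learned weights from the study.

```python
# Sketch only: weighted-sum DIR scalarization re-run over a learned *set* of
# weight combinations (a "class solution") to obtain several trade-off results.
import numpy as np

def dissimilarity(params):       # placeholder for an image-dissimilarity term
    return float(np.sum((params - 1.0) ** 2))

def deformation_energy(params):  # placeholder for a smoothness/regularisation term
    return float(np.sum(params ** 2))

def register(weights, n_params=10, iters=200, lr=0.05):
    """Minimise w1*dissimilarity + w2*deformation_energy by finite-difference descent."""
    w1, w2 = weights
    params = np.zeros(n_params)
    eps = 1e-4
    for _ in range(iters):
        f0 = w1 * dissimilarity(params) + w2 * deformation_energy(params)
        grad = np.zeros_like(params)
        for i in range(len(params)):
            p = params.copy(); p[i] += eps
            f1 = w1 * dissimilarity(p) + w2 * deformation_energy(p)
            grad[i] = (f1 - f0) / eps
        params -= lr * grad
    return params

# Hypothetical class solution: a small set of weight combinations reused on every
# new instance instead of tuning weights per case.
class_solution = [(1.0, 0.1), (1.0, 0.5), (1.0, 2.0)]
for w in class_solution:
    params = register(w)
    print(w, round(dissimilarity(params), 3), round(deformation_energy(params), 3))
```

Each weight combination yields one registration; together they approximate a set of trade-off solutions without per-instance tuning.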


Author(s):  
Ahlam Fuad ◽  
Amany bin Gahman ◽  
Rasha Alenezy ◽  
Wed Ateeq ◽  
Hend Al-Khalifa

The plural of paucity is one type of broken plural used in Classical Arabic when the number of people or objects ranges from three to ten. Our evaluation of four current state-of-the-art Arabic morphological analyzers shows a lack of identification of broken plural words, specifically the plural of paucity. Therefore, this paper presents Qillah (“paucity”), a morphological extension that is built on top of other morphological analyzers and uses a hybrid rule-based and lexicon-based approach to enhance the identification of the plural of paucity. Two versions of Qillah were developed: one based on the FARASA morphological analyzer and the other on the CALIMA Star analyzer, two of the best-performing morphological analyzers. We designed two experiments to evaluate the effectiveness of our proposed solution on a collection of 402 different Arabic words. The version based on CALIMA Star achieved a maximum accuracy of 93% in identifying plural-of-paucity words compared to the baselines. It also achieved a maximum accuracy of 98% compared to the baselines in identifying the plurality of words.
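For illustration only, the sketch below shows what a hybrid lexicon-plus-rules check for the plural of paucity could look like. The lexicon entries and the unvocalised surface-pattern tests are assumptions made for this example; they are not Qillah’s actual rules or data.

```python
# -*- coding: utf-8 -*-
# Minimal sketch of a hybrid lexicon + rule-based check for the plural of paucity.
# Lexicon entries and pattern tests are illustrative assumptions.

PAUCITY_LEXICON = {"أولاد", "أنفس", "أرغفة", "فتية"}  # hypothetical known plurals

def matches_paucity_pattern(word):
    """Very rough surface tests for the four paucity templates
    (af'ul, af'aal, af'ila, fi'la) on unvocalised text."""
    if len(word) == 4 and word[0] == "أ":                      # af'ul
        return True
    if len(word) == 5 and word[0] == "أ" and word[3] == "ا":   # af'aal
        return True
    if len(word) == 5 and word[0] == "أ" and word[-1] == "ة":  # af'ila
        return True
    if len(word) == 4 and word[-1] == "ة":                     # fi'la
        return True
    return False

def is_plural_of_paucity(word):
    # Lexicon lookup first; fall back to the surface pattern rules.
    return word in PAUCITY_LEXICON or matches_paucity_pattern(word)

print(is_plural_of_paucity("أولاد"))  # True (found in the lexicon)
print(is_plural_of_paucity("كتاب"))   # False
```

In practice the surface tests over-generate, which is why combining them with a lexicon and an underlying morphological analyzer is needed.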


Author(s):  
Alexander Troussov ◽  
František Dařena ◽  
Jan Žižka ◽  
Denis Parra ◽  
Peter Brusilovsky

Spreading Activation is a family of graph-based algorithms widely used in areas such as information retrieval, epidemic models, and recommender systems. In this paper we introduce a novel Spreading Activation (SA) method that we call Vectorised Spreading Activation (VSA). VSA algorithms, like “traditional” SA algorithms, iteratively propagate activation from the initially activated set of nodes to the other nodes in a network through outward links. The level of a node’s activation can be used as a centrality measure, in accordance with the dynamic, model-based view of centrality that focuses on the outcomes for nodes in a network where something is flowing from node to node across the edges. Representing activation as vectors makes it possible to exploit information about the different dimensions of the flow and about its dynamics. In this capacity, VSA algorithms can model a multitude of complex multidimensional network flows. We present the results of numerical simulations on small synthetic social networks and multidimensional network models of folksonomies, which show that the results of VSA propagation are more sensitive to the position of the initial seed and to the community structure of the network than the results produced by traditional SA algorithms. We tentatively conclude that VSA methods could be instrumental in developing scalable and computationally efficient algorithms that achieve synergy between the computation of centrality indices and the detection of community structures in networks. Based on our preliminary results and on improvements over previous studies, we foresee further advances in this family of algorithms and in their application to centrality measurement.
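A minimal sketch of the vectorised propagation idea follows. It is a toy illustration assuming a simple decay-and-split update rule over outward links, not the authors’ implementation; the graph, seeds, and decay factor are made up for the example.

```python
# Sketch: vectorised spreading activation. Each node carries an activation
# *vector* rather than a scalar; at every iteration a decayed share of a node's
# activation is pushed along its outward edges.
import numpy as np

def vectorised_spreading_activation(adjacency, seed_vectors, decay=0.5, iterations=5):
    """adjacency: dict node -> list of successor nodes (outward links);
    seed_vectors: dict node -> initial activation vector."""
    dim = len(next(iter(seed_vectors.values())))
    activation = {n: np.zeros(dim) for n in adjacency}
    activation.update({n: np.asarray(v, dtype=float) for n, v in seed_vectors.items()})
    for _ in range(iterations):
        new_activation = {n: v.copy() for n, v in activation.items()}
        for node, successors in adjacency.items():
            if not successors:
                continue
            share = decay * activation[node] / len(successors)
            for succ in successors:
                new_activation[succ] = new_activation[succ] + share
        activation = new_activation
    return activation

# Toy network: two loose groups {a, b, c} and {d, e}, bridged by the edge c -> d.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["d"], "d": ["e"], "e": []}
# Two-dimensional flow: each seed injects activation along its own dimension.
seeds = {"a": [1.0, 0.0], "e": [0.0, 1.0]}
for node, vec in sorted(vectorised_spreading_activation(graph, seeds).items()):
    print(node, np.round(vec, 3))  # the vector (or its norm) acts as a centrality-like score
```

Because each seed contributes a separate dimension, the resulting vectors retain where the flow reaching a node came from, which is what makes the propagation sensitive to seed position and community structure.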


Author(s):  
Devesh Bhasin ◽  
Daniel A. McAdams

Abstract The development of multi-functional designs is one of the prime reasons to adopt bio-inspired design in engineering. However, the development of multi-functional bio-inspired designs is mostly solution-driven, in the sense that an available multi-functional solution drives the search for a problem that can be solved by implementing it. The solution-driven nature of the approach restricts engineering designers to the function combinations found in nature. A problem-driven approach to multi-functional design, on the other hand, allows designers to form the combination of functions best suited to the problem at hand. However, few works in the literature focus on the development of multi-functional bio-inspired solutions from a problem-driven perspective. In this work, we analyze the existing works that aid designers in combining multiple biological strategies to develop multi-functional bio-inspired designs. The analysis is carried out by comparing and contrasting the existing frameworks that support multi-functional bio-inspired design generation. The criteria of comparison are derived from the steps involved in the unified problem-driven biomimetic approach. In addition, we qualitatively compare the multi-functional bio-inspired designs developed using existing frameworks to the multi-functional designs existing in biology. Our aim is to explore the capabilities and limitations of current methods to support the generation of multi-functional bio-inspired designs.


2020 ◽  
Vol 34 (05) ◽  
pp. 9354-9361
Author(s):  
Kun Xu ◽  
Linfeng Song ◽  
Yansong Feng ◽  
Yan Song ◽  
Dong Yu

Existing entity alignment methods mainly differ in how they encode the knowledge graph, but they typically use the same decoding method, which independently chooses the locally optimal match for each source entity. This decoding method may not only cause the “many-to-one” problem but also neglect the coordinated nature of the task: each alignment decision may be highly correlated with the other decisions. In this paper, we introduce two coordinated reasoning methods, namely an Easy-to-Hard decoding strategy and a joint entity alignment algorithm. Specifically, the Easy-to-Hard strategy first retrieves the model-confident alignments from the predicted results and then incorporates them as additional knowledge to resolve the remaining model-uncertain alignments. To achieve this, we further propose an enhanced alignment model that is built on the current state-of-the-art baseline. In addition, to address the many-to-one problem, we propose to jointly predict entity alignments so that the one-to-one constraint can be naturally incorporated into the alignment prediction. Experimental results show that our model achieves state-of-the-art performance and that our reasoning methods can also significantly improve existing baselines.
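To illustrate the one-to-one joint decoding idea, the generic sketch below contrasts greedy per-entity decoding with a global assignment over the full similarity matrix (here via the Hungarian algorithm). The similarity scores are made up, and this is not the paper’s exact algorithm.

```python
# Sketch: greedy decoding can map many source entities to one target, while a
# global assignment enforces the one-to-one constraint over all decisions jointly.
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical similarity scores between 3 source and 3 target entities.
similarity = np.array([
    [0.90, 0.60, 0.10],
    [0.80, 0.55, 0.20],
    [0.30, 0.40, 0.70],
])

# Greedy (independent) decoding: sources 0 and 1 both pick target 0.
greedy = similarity.argmax(axis=1)

# Joint decoding: maximise total similarity subject to a one-to-one matching
# (Hungarian algorithm on the negated similarities).
rows, cols = linear_sum_assignment(-similarity)

print("greedy:", list(greedy))           # [0, 0, 2] -> "many-to-one" problem
print("joint :", list(zip(rows, cols)))  # [(0, 0), (1, 1), (2, 2)]
```

The joint solution trades a slightly lower score for source 1 against a globally consistent, one-to-one alignment.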


1997 ◽  
Author(s):  
J. W. Watts

Abstract Reservoir simulation is a mature technology, and nearly all major reservoir development decisions are based in some way on simulation results. Despite this maturity, the technology is changing rapidly, and it is important for both providers and users of reservoir simulation software to understand where this change is leading. This paper takes a long-term view of reservoir simulation, describing where it has been and where it is now. It closes with a prediction of what the reservoir simulation state of the art will be in 2007 and speculation regarding certain aspects of simulation in 2017.

Introduction Today, input from reservoir simulation is used in nearly all major reservoir development decisions. This has come about in part through technology improvements that make it easier to simulate reservoirs on the one hand and possible to simulate them more realistically on the other. However, although reservoir simulation has come a long way from its beginnings in the 1950s, substantial further improvement is needed, and this is stimulating continual change in how simulation is performed. Given that this change is occurring, both developers and users of simulation have an interest in understanding where it is leading. Obviously, developers of new simulation capabilities need this understanding in order to keep their products relevant and competitive. However, people who use simulation also need it; how else can they be confident that the organizations that provide their simulators are keeping up with advancing technology and moving in the right direction? In order to understand where we are going, it is helpful to know where we have been. Thus, this paper begins with a discussion of historical developments in reservoir simulation. It then briefly describes the current state of the art in terms of how simulation is performed today. Finally, it closes with some general predictions.


Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7696
Author(s):  
Umair Yousaf ◽  
Ahmad Khan ◽  
Hazrat Ali ◽  
Fiaz Gul Khan ◽  
Zia ur Rehman ◽  
...  

License plate localization is the process of finding the license plate area and drawing a bounding box around it, while recognition is the process of identifying the text within the bounding box. Current state-of-the-art license plate localization and recognition approaches require license plates of standard size, style, font, and color. Unfortunately, in Pakistan, license plates are non-standard and vary in terms of the characteristics mentioned above. This paper presents a deep-learning-based approach to localize and recognize Pakistani license plates with non-uniform and non-standardized sizes, fonts, and styles. We developed a new Pakistani license plate dataset (PLPD) to train and evaluate the proposed model. We conducted extensive experiments to compare the accuracy of the proposed approach with existing techniques. The results show that the proposed method outperforms the other methods at localizing and recognizing non-standard license plates.


2015 ◽  
Vol 52 ◽  
pp. 399-443 ◽  
Author(s):  
Diederik Marijn Roijers ◽  
Shimon Whiteson ◽  
Frans A. Oliehoek

In this article, we propose new algorithms for multi-objective coordination graphs (MO-CoGs). Key to the efficiency of these algorithms is that they compute a convex coverage set (CCS) instead of a Pareto coverage set (PCS). Not only is a CCS a sufficient solution set for a large class of problems, it also has important characteristics that facilitate more efficient solutions. We propose two main algorithms for computing a CCS in MO-CoGs. Convex multi-objective variable elimination (CMOVE) computes a CCS by performing a series of agent eliminations, which can be seen as solving a series of local multi-objective subproblems. Variable elimination linear support (VELS) iteratively identifies the single weight vector, w, that can lead to the maximal possible improvement on a partial CCS and calls variable elimination to solve a scalarized instance of the problem for w. VELS is faster than CMOVE for small and medium numbers of objectives and can compute an ε-approximate CCS in a fraction of the runtime. In addition, we propose variants of these methods that employ AND/OR tree search instead of variable elimination to achieve memory efficiency. We analyze the runtime and space complexities of these methods, prove their correctness, and compare them empirically against a naive baseline and an existing PCS method, in terms of both memory usage and runtime. Our results show that, by focusing on the CCS, these methods achieve much better scalability in the number of agents than the current state of the art.
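A small sketch of the PCS/CCS distinction for two objectives follows: a value vector belongs to the CCS only if it is optimal for some linear scalarisation weight. The weight-sampling approximation and the toy value vectors below are illustrative assumptions, not the CMOVE or VELS algorithms from the article.

```python
# Sketch: Pareto coverage set (PCS) vs convex coverage set (CCS) for 2 objectives.
import numpy as np

def pareto_coverage_set(values):
    """Keep vectors that no other vector dominates in every objective."""
    return [v for v in values
            if not any(np.all(u >= v) and np.any(u > v) for u in values)]

def approximate_ccs(values, num_weights=101):
    """Keep vectors that maximise w·v for at least one sampled weight w."""
    best_indices = set()
    for w1 in np.linspace(0.0, 1.0, num_weights):
        w = np.array([w1, 1.0 - w1])
        best_indices.add(max(range(len(values)), key=lambda i: float(w @ values[i])))
    return [values[i] for i in sorted(best_indices)]

values = [np.array(v) for v in [(4.0, 1.0), (1.0, 4.0), (2.4, 2.4), (1.0, 1.0)]]
print("PCS:", [tuple(v) for v in pareto_coverage_set(values)])  # includes (2.4, 2.4)
print("CCS:", [tuple(v) for v in approximate_ccs(values)])      # excludes it
```

The vector (2.4, 2.4) is Pareto-optimal but never optimal for any linear weighting, so it is in the PCS yet not in the CCS, which is why the CCS can be a smaller, cheaper-to-compute solution set.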


2018 ◽  
Vol 21 (62) ◽  
pp. 75
Author(s):  
Gregor Behnke ◽  
Susanne Biundo

Linear temporal logic (LTL) provides expressive means to specify temporally extended goals as well as preferences. Recent research has focused on compilation techniques, i.e., methods that alter the domain to ensure that every solution adheres to the temporally extended goals. This requires either new actions or a construction that is exponential in the size of the formula. A translation into boolean satisfiability (SAT), on the other hand, requires neither. So far only one such encoding exists, which is based on the parallel ∃-step encoding for classical planning. We show a connection between it and recently developed compilation techniques for LTL, which may be exploited in the future. The major drawback of the encoding is that it is limited to LTL without the X operator. We show how to integrate X and describe two new encodings, which allow for more parallelism than the original encoding. An empirical evaluation shows that the new encodings outperform the current state-of-the-art encoding.
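As a toy illustration of the general idea of encoding temporally extended goals directly as propositional clauses over time-indexed variables: the sketch below handles only F (eventually), G (globally), and a bounded X (next) for atomic propositions, with a made-up horizon and variable numbering. It is not the parallel ∃-step encoding discussed in the paper.

```python
# Toy sketch: bounded-horizon LTL goals as plain CNF clauses over time-indexed
# variables, with no extra actions and no exponential construction.
T = 4                      # plan horizon (steps 0..T), arbitrary example value
PROPS = {"p": 0, "q": 1}   # hypothetical propositions of the planning problem

def var(prop, t):
    """Map (proposition, time step) to a positive DIMACS variable index."""
    return 1 + PROPS[prop] * (T + 1) + t

def encode_eventually(prop):
    # F prop: prop holds at some step -> one clause, a disjunction over time.
    return [[var(prop, t) for t in range(T + 1)]]

def encode_globally(prop):
    # G prop: prop holds at every step -> one unit clause per step.
    return [[var(prop, t)] for t in range(T + 1)]

def encode_next(prop, t):
    # X prop at step t: prop holds at step t+1 (empty clause if past the horizon).
    return [[var(prop, t + 1)]] if t < T else [[]]

clauses = encode_eventually("p") + encode_globally("q") + encode_next("p", 0)
for clause in clauses:
    print(" ".join(map(str, clause)) + " 0")  # DIMACS-style clause lines
```

Each temporal goal contributes a handful of clauses that grow linearly with the horizon, which conveys why a SAT translation avoids both new actions and formula blow-up for such goals.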

