Qillah: A Morphological Extension for Identifying Plural-of-Paucity Arabic Words

Author(s):  
Ahlam Fuad ◽  
Amany bin Gahman ◽  
Rasha Alenezy ◽  
Wed Ateeq ◽  
Hend Al-Khalifa

Plural of paucity is one type of broken plural used in classical Arabic; it is used when the number of people or objects ranges from three to ten. Our evaluation of four current state-of-the-art Arabic morphological analyzers showed that they largely fail to identify broken plural words, specifically the plural of paucity. Therefore, this paper presents "قلة" Qillah (paucity), a morphological extension built on top of other morphological analyzers that uses a hybrid rule-based and lexicon-based approach to improve the identification of the plural of paucity. Two versions of Qillah were developed, one based on the FARASA morphological analyzer and the other on the CALIMA Star analyzer, two of the best-performing morphological analyzers. We designed two experiments to evaluate the effectiveness of our proposed solution on a collection of 402 different Arabic words. The version based on CALIMA Star achieved a maximum accuracy of 93% in identifying plural-of-paucity words compared to the baselines. It also achieved a maximum accuracy of 98% compared to the baselines in identifying the plurality of the words.
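To make the rule-based half of such a hybrid approach concrete, here is a minimal sketch, assuming the four classical plural-of-paucity patterns (af'ul, af'al, af'ila, fi'la) checked over unvocalised surface forms; the regexes, example words, and tiny exception lexicon are illustrative and not Qillah's actual rules, which run on top of a base analyzer's output.

```python
import re

# Unvocalised surface shapes of the four classical plural-of-paucity
# patterns; AR matches one Arabic letter.
AR = r"[\u0621-\u064A]"
PAUCITY_PATTERNS = {
    "af'ul":  re.compile(rf"أ{AR}{{3}}"),        # e.g. أنهر (rivers)
    "af'al":  re.compile(rf"أ{AR}{{2}}ا{AR}"),   # e.g. أقلام (pens)
    "af'ila": re.compile(rf"أ{AR}{{3}}ة"),       # e.g. أرغفة (loaves)
    "fi'la":  re.compile(rf"{AR}{{3}}ة"),        # e.g. صبية (boys)
}

# Toy exception lexicon: forms that match a pattern shape but are not
# plurals of paucity (illustrative entry only).
NOT_PAUCITY = {"قرية"}   # "village", a singular noun with fi'la shape

def is_plural_of_paucity(word: str) -> bool:
    """Rule-based check, intended to run after a base analyzer has
    already tagged the word as a (broken) plural."""
    if word in NOT_PAUCITY:
        return False
    return any(p.fullmatch(word) for p in PAUCITY_PATTERNS.values())

print(is_plural_of_paucity("أقلام"))  # True
print(is_plural_of_paucity("قرية"))   # False
```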

Author(s):  
Alexander Diederich ◽  
Christophe Bastien ◽  
Karthikeyan Ekambaram ◽  
Alexis Wilson

The introduction of automated L5 driving technologies will revolutionise the design of vehicle interiors and seating configurations, improving occupant comfort and experience. It is foreseen that pre-crash emergency braking and swerving manoeuvres will affect occupant posture, which could lead to an interaction with a deploying airbag. This research addresses the urgent safety need of defining the occupant's kinematic envelope during that pre-crash phase, considering rotated seat arrangements and different seatbelt configurations. The research used two different sets of volunteer tests of L5 vehicle manoeuvres: the first based on 22 fit 50th-percentile males wearing a lap belt (OM4IS), the other on 87 volunteers with a BMI range of 19 to 67 kg/m² wearing a 3-point belt (UMTRI). Unique biomechanical kinematics corridors were then defined, as a function of belt configuration and vehicle manoeuvre, to calibrate an Active Human Model (AHM) using multi-objective optimisation coupled with a CORrelation and Analysis (CORA) rating. The research improved the AHM's omnidirectional kinematic response over the current state of the art in a generic lap-belted environment. The AHM was then tested in a rotated seating arrangement under extreme braking, highlighting that maximum lateral and frontal motions are comparable, independent of the belt system, while the asymmetry of the 3-point belt increased the occupant's motion towards the seatbelt buckle. It was also observed that frontal occupant excursion decreased by 200 mm compared to a lap-belted configuration. This improved omnidirectional AHM is the first step towards designing safer future L5 vehicle interiors.
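A CORA rating combines corridor sub-scores (does the simulated trace stay inside inner/outer corridors around the test data?) with correlation-based sub-scores. The following is a minimal sketch of the corridor part only, assuming corridor half-widths defined as fractions of the test signal's peak; the widths, traces, and weighting are illustrative stand-ins, not the paper's calibration setup.

```python
import numpy as np

def corridor_score(test, sim, inner=0.05, outer=0.5):
    """CORA-style corridor sub-rating (sketch, not the full CORA spec).

    Each time sample scores 1.0 inside the inner corridor, 0.0 outside
    the outer corridor, with linear interpolation in between. Corridor
    half-widths are fractions of the test signal's peak magnitude."""
    peak = np.max(np.abs(test))
    dev = np.abs(sim - test)
    lo, hi = inner * peak, outer * peak
    per_sample = np.clip((hi - dev) / (hi - lo), 0.0, 1.0)
    return per_sample.mean()

# Toy usage: a head-excursion trace (mm) from a volunteer test vs the AHM
t = np.linspace(0.0, 1.0, 200)
test_trace = 180.0 * np.sin(np.pi * t)            # stand-in test signal
sim_trace = test_trace + 10.0 * np.random.randn(200)
print(f"corridor rating: {corridor_score(test_trace, sim_trace):.2f}")
```

In a calibration loop, scores like this (one per manoeuvre and belt configuration) would form the objectives of the multi-objective optimisation.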


Author(s):  
Alexander Troussov ◽  
František Dařena ◽  
Jan Žižka ◽  
Denis Parra ◽  
Peter Brusilovsky

Spreading Activation is a family of graph-based algorithms widely used in areas such as information retrieval, epidemic models, and recommender systems. In this paper we introduce a novel Spreading Activation (SA) method that we call Vectorised Spreading Activation (VSA). VSA algorithms, like "traditional" SA algorithms, iteratively propagate activation from the initially activated set of nodes to the other nodes in a network through outward links. The level of a node's activation can be used as a centrality measure, in accordance with the dynamic, model-based view of centrality that focuses on the outcomes for nodes in a network where something flows from node to node across the edges. Representing the activation by vectors allows the use of information about the various dimensionalities of the flow and its dynamics. In this capacity, VSA algorithms can model a multitude of complex multidimensional network flows. We present the results of numerical simulations on small synthetic social networks and multidimensional network models of folksonomies, which show that the results of VSA propagation are more sensitive to the positions of the initial seed and to the community structure of the network than the results produced by traditional SA algorithms. We tentatively conclude that VSA methods could be instrumental in developing scalable and computationally efficient algorithms that achieve synergy between the computation of centrality indexes and the detection of community structures in networks. Based on our preliminary results and on improvements made over previous studies, we foresee advances in the current state of the art of this family of algorithms and their applications to centrality measurement.
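As a minimal sketch of the vectorised idea: each node carries an activation vector with one component per flow dimension (e.g., per tag or per community) rather than a scalar, and each propagation pulse decays as it spreads along outgoing edges. The decay factor, iteration count, and toy graph below are illustrative assumptions, not the paper's parameterisation.

```python
import numpy as np

def vectorised_spreading_activation(adj, seeds, dims, decay=0.8, iters=10):
    """Vector-valued spreading activation (illustrative sketch).

    adj   : (n, n) weighted adjacency matrix, adj[i, j] = edge i -> j
    seeds : {node_index: dimension_index} for initially activated nodes
    dims  : number of flow dimensions (e.g., one per tag or community)
    """
    n = adj.shape[0]
    pulse = np.zeros((n, dims))
    for node, dim in seeds.items():
        pulse[node, dim] = 1.0
    out_deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)
    transfer = (adj / out_deg).T            # row j collects inflow to node j
    total = pulse.copy()
    for _ in range(iters):
        pulse = decay * (transfer @ pulse)  # one decayed propagation step
        total += pulse                      # accumulate activation received
    return total                            # row norms ~ dynamic centrality

# Toy 5-node network, two seeds activated in different dimensions
adj = np.array([[0, 1, 1, 0, 0],
                [1, 0, 1, 0, 0],
                [1, 1, 0, 1, 0],
                [0, 0, 1, 0, 1],
                [0, 0, 0, 1, 0]], dtype=float)
print(vectorised_spreading_activation(adj, seeds={0: 0, 4: 1}, dims=2).round(2))
```

Because the two seed pulses occupy different vector components, the output shows how much flow each node received from each seed separately, which is what makes the result sensitive to seed position and community structure.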


Author(s):  
Devesh Bhasin ◽  
Daniel A. McAdams

Abstract The development of multi-functional designs is one of the prime reasons to adopt bio-inspired design in engineering design. However, the development of multi-functional bio-inspired designs is mostly solution-driven, in the sense that an available multi-functional solution drives the search for a problem that can be solved by implementing it. The solution-driven nature of the approach restricts engineering designers to the function combinations found in nature. A problem-driven approach to multi-functional design, on the other hand, allows designers to form the combination of functions best suited to the problem at hand. However, few works in the literature focus on the development of multi-functional bio-inspired solutions from a problem-driven perspective. In this work, we analyze the existing works that help designers combine multiple biological strategies to develop multi-functional bio-inspired designs. The analysis is carried out by comparing and contrasting the existing frameworks that support multi-functional bio-inspired design generation. The criteria of comparison are derived from the steps involved in the unified problem-driven biomimetic approach. In addition, we qualitatively compare the multi-functional bio-inspired designs developed using existing frameworks to the multi-functional designs found in biology. Our aim is to explore the capabilities and limitations of current methods to support the generation of multi-functional bio-inspired designs.


2020 ◽  
Vol 34 (05) ◽  
pp. 9354-9361
Author(s):  
Kun Xu ◽  
Linfeng Song ◽  
Yansong Feng ◽  
Yan Song ◽  
Dong Yu

Existing entity alignment methods mainly vary in their choices of encoding the knowledge graph, but they typically use the same decoding method, which independently chooses the locally optimal match for each source entity. This decoding method may not only cause the "many-to-one" problem but also neglect the coordinated nature of this task, that is, each alignment decision may highly correlate with the other decisions. In this paper, we introduce two coordinated reasoning methods, i.e., the Easy-to-Hard decoding strategy and a joint entity alignment algorithm. Specifically, the Easy-to-Hard strategy first retrieves the model-confident alignments from the predicted results and then incorporates them as additional knowledge to resolve the remaining model-uncertain alignments. To achieve this, we further propose an enhanced alignment model that is built on the current state-of-the-art baseline. In addition, to address the many-to-one problem, we propose to jointly predict entity alignments so that the one-to-one constraint can be naturally incorporated into the alignment prediction. Experimental results show that our model achieves state-of-the-art performance and our reasoning methods can also significantly improve existing baselines.
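To illustrate the two decoding ideas, here is a minimal sketch operating on a precomputed source-target similarity matrix: pairs whose top score beats the runner-up by a margin (a stand-in for model confidence) are fixed first, and the remaining entities are assigned jointly under a one-to-one constraint via the Hungarian algorithm. The margin, matrix, and helper names are assumptions for the sketch, not the paper's enhanced alignment model.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def easy_to_hard_decode(sim, margin=0.2):
    """sim[i, j]: similarity of source entity i and target entity j.
    Returns a one-to-one alignment as a sorted list of (src, tgt)."""
    n = sim.shape[0]
    # Easy phase: fix alignments whose best score exceeds the runner-up
    # by `margin`. (A real system would also resolve collisions where
    # two confident sources pick the same target.)
    ranked = np.sort(sim, axis=1)
    confident = (ranked[:, -1] - ranked[:, -2]) >= margin
    aligned = {i: int(sim[i].argmax()) for i in np.where(confident)[0]}
    # Hard phase: jointly assign the rest over the remaining targets,
    # enforcing one-to-one via the Hungarian algorithm (maximise by
    # negating the similarities).
    rest_src = [i for i in range(n) if i not in aligned]
    rest_tgt = [j for j in range(sim.shape[1]) if j not in aligned.values()]
    if rest_src:
        rows, cols = linear_sum_assignment(-sim[np.ix_(rest_src, rest_tgt)])
        aligned.update({rest_src[r]: rest_tgt[c] for r, c in zip(rows, cols)})
    return sorted(aligned.items())

sim = np.array([[0.9, 0.1, 0.2],
                [0.4, 0.5, 0.45],
                [0.3, 0.5, 0.6]])
print(easy_to_hard_decode(sim))  # [(0, 0), (1, 1), (2, 2)]
```

Note how independent greedy decoding could map both source 1 and source 2 to target 1 (the many-to-one problem), whereas the joint phase resolves the near-tie globally.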


1997 ◽  
Author(s):  
J. W. Watts

Abstract Reservoir simulation is a mature technology, and nearly all major reservoir development decisions are based in some way on simulation results. Despite this maturity, the technology is changing rapidly. It is important for both providers and users of reservoir simulation software to understand where this change is leading. This paper takes a long-term view of reservoir simulation, describing where it has been and where it is now. It closes with a prediction of what the reservoir simulation state of the art will be in 2007 and speculation regarding certain aspects of simulation in 2017. Introduction Today, input from reservoir simulation is used in nearly all major reservoir development decisions. This has come about in part through technology improvements that make it easier to simulate reservoirs on the one hand and possible to simulate them more realistically on the other. Although reservoir simulation has come a long way from its beginnings in the 1950s, substantial further improvement is needed, and this is stimulating continual change in how simulation is performed. Given that this change is occurring, both developers and users of simulation have an interest in understanding where it is leading. Obviously, developers of new simulation capabilities need this understanding in order to keep their products relevant and competitive. However, people who use simulation also need this understanding; how else can they be confident that the organizations that provide their simulators are keeping up with advancing technology and moving in the right direction? In order to understand where we are going, it is helpful to know where we have been. Thus, this paper begins with a discussion of historical developments in reservoir simulation. It then briefly describes the current state of the art in terms of how simulation is performed today. Finally, it closes with some general predictions.


Author(s):  
A. El-Shafei ◽  
N. Rieger

This paper provides an overview of the currently available technologies for automated machinery condition evaluation and fault diagnosis within an overall plant asset management system. The paper presents a basic overview of an integrated plant asset management system and focuses on the available technologies for automated diagnostics, including statistical analysis of data, parametric model diagnosis, non-parametric model diagnosis (artificial neural networks), and rule-based diagnostics, including expert systems and fuzzy logic. The current state of the art and the expected realistic future developments are discussed.
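The rule-based branch of such a system encodes diagnostic know-how as explicit condition-action rules over measured features. A minimal sketch follows; the feature names, thresholds, and fault labels are illustrative assumptions, not values from the paper.

```python
# Illustrative rule-based fault diagnosis over normalised vibration
# features. vib_1x / vib_2x: vibration amplitude at 1x / 2x shaft speed;
# bearing_env: envelope-spectrum energy. All thresholds are assumed.
RULES = [
    (lambda f: f["vib_1x"] > 0.7 and f["vib_2x"] < 0.2, "unbalance"),
    (lambda f: f["vib_2x"] > 0.5, "misalignment"),
    (lambda f: f["bearing_env"] > 0.6, "bearing defect"),
]

def diagnose(features):
    """Return every fault hypothesis whose rule fires."""
    return [fault for cond, fault in RULES if cond(features)] or ["no fault"]

print(diagnose({"vib_1x": 0.9, "vib_2x": 0.1, "bearing_env": 0.2}))
# ['unbalance']
```

A fuzzy-logic variant would replace the crisp thresholds with membership functions, and an expert system would add a chaining inference engine over the same kind of rules.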


Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7696
Author(s):  
Umair Yousaf ◽  
Ahmad Khan ◽  
Hazrat Ali ◽  
Fiaz Gul Khan ◽  
Zia ur Rehman ◽  
...  

License plate localization is the process of finding the license plate area and drawing a bounding box around it, while recognition is the process of identifying the text within the bounding box. Current state-of-the-art license plate localization and recognition approaches require license plates of standard size, style, font, and color. Unfortunately, in Pakistan, license plates are non-standard and vary in all of these characteristics. This paper presents a deep-learning-based approach to localize and recognize Pakistani license plates with non-uniform and non-standardized sizes, fonts, and styles. We developed a new Pakistani license plate dataset (PLPD) to train and evaluate the proposed model. We conducted extensive experiments to compare the accuracy of the proposed approach with existing techniques. The results show that the proposed method outperformed existing methods in localizing and recognizing non-standard license plates.
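The general localize-then-recognize pipeline can be sketched as follows, using OpenCV's bundled plate cascade and Tesseract OCR purely as stand-ins for the paper's trained deep networks; the image path is illustrative.

```python
import cv2
import pytesseract  # requires a local Tesseract installation

# Stage 1 (localization): find plate bounding boxes.
# Stage 2 (recognition): read the text inside each box.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_russian_plate_number.xml")

def read_plates(image_path):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 4):
        crop = gray[y:y + h, x:x + w]
        # --psm 7 treats the crop as a single text line
        text = pytesseract.image_to_string(crop, config="--psm 7").strip()
        results.append(((x, y, w, h), text))  # bounding box + plate text
    return results

print(read_plates("car.jpg"))  # illustrative path
```

A deep-learning approach such as the paper's would replace both stages with trained networks (a detector for stage 1, a text recognizer for stage 2) so that non-standard fonts and layouts can be learned from the PLPD data rather than hand-engineered.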


2018 ◽  
Vol 21 (62) ◽  
pp. 75
Author(s):  
Gregor Behnke ◽  
Susanne Biundo

Linear temporal logic (LTL) provides expressive means to specify temporally extended goals as well as preferences. Recent research has focused on compilation techniques, i.e., methods to alter the domain to ensure that every solution adheres to the temporally extended goals. This requires either new actions or a construction that is exponential in the size of the formula. A translation into Boolean satisfiability (SAT), on the other hand, requires neither. So far only one such encoding exists, based on the parallel ∃-step encoding for classical planning. We show a connection between it and recently developed compilation techniques for LTL, which may be exploited in the future. The major drawback of the encoding is that it is limited to LTL without the X operator. We show how to integrate X and describe two new encodings, which allow for more parallelism than the original encoding. An empirical evaluation shows that the new encodings outperform the current state-of-the-art encoding.
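The core of a bounded SAT encoding is to give each subformula φ and time step t a propositional variable holds(φ, t) and turn the operator semantics into clauses. A minimal sketch for the X ("next") operator over a fixed horizon follows; the variable numbering and clause container are illustrative, and a real encoding would cover all LTL operators, not only X.

```python
from itertools import count

# Bounded encoding of "X p" over horizon T:
#   holds(Xp, t) <-> holds(p, t+1)      for t = 0 .. T-1
# (X p is left undefined at the final step of the bounded trace.)
T = 5
fresh = count(1)
var = {}  # (formula, step) -> SAT variable number

def v(formula, t):
    return var.setdefault((formula, t), next(fresh))

clauses = []
for t in range(T):
    a, b = v(("X", "p"), t), v("p", t + 1)
    clauses.append([-a, b])   # holds(Xp, t) -> holds(p, t+1)
    clauses.append([a, -b])   # holds(p, t+1) -> holds(Xp, t)

print(clauses)  # DIMACS-style clause list, usable with any SAT solver
```

The parallelism question the paper addresses is orthogonal to this equivalence: it concerns which sets of planning actions may share a time step without invalidating such formula constraints.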


2014 ◽  
Vol 2014 ◽  
pp. 1-14 ◽  
Author(s):  
Paul A. Zandbergen

Public health datasets increasingly use geographic identifiers such as an individual's address. Geocoding these addresses often provides new insights, since it becomes possible to examine spatial patterns and associations. Address information is typically considered confidential and is therefore not released or shared with others. Publishing maps with the locations of individuals, however, may also breach confidentiality, since addresses and associated identities can be discovered through reverse geocoding. One commonly used technique to protect confidentiality when releasing individual-level geocoded data is geographic masking. This typically consists of applying a certain amount of random perturbation in a systematic manner to reduce the risk of reidentification. A number of geographic masking techniques have been developed, as well as methods to quantify the risk of reidentification associated with a particular masking method. This paper presents a review of the current state of the art in geographic masking, summarizing the various methods and their strengths and weaknesses. Despite recent progress, no universally accepted or endorsed geographic masking technique has emerged; researchers, meanwhile, are publishing maps that use geographic masking of confidential locations. Any researcher publishing such maps is advised to become familiar with the different masking techniques available and their associated reidentification risks.
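One widely cited masking technique of this kind is donut masking: each point is displaced by a random angle and a distance drawn between a minimum radius (guaranteeing some displacement) and a maximum radius (bounding the spatial error). A minimal sketch follows; the radii are illustrative and in practice would be tuned to local population density to balance reidentification risk against analytic utility.

```python
import math
import random

def donut_mask(lat, lon, r_min_m=100.0, r_max_m=500.0):
    """Donut masking sketch: displace a point by a random offset whose
    distance lies between r_min_m and r_max_m metres, sampled with
    uniform density over the annulus area."""
    theta = random.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(random.uniform(r_min_m ** 2, r_max_m ** 2))
    dlat = (r * math.cos(theta)) / 111_320  # ~metres per degree latitude
    dlon = (r * math.sin(theta)) / (111_320 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

random.seed(7)
print(donut_mask(35.08, -106.65))  # masked copy of a confidential point
```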


Author(s):  
Ozgur Ekincioglu ◽  
M. Hulusi Ozkul ◽  
Silvia Patachia

The usage of polymers in different sectors has been increasing in recent decades, and our current age has even been called the polymer age. In concrete technology, polymers are widely used to modify the properties of both mortar and concrete, and their use dates back to the 1920s. Macro-defect-free (MDF) cements are one particular type of cement-polymer composite, developed and patented by scientists at Imperial College at the beginning of the 1980s. MDF cements are produced by mixing cement (commonly calcium aluminate cement) with small amounts of polymer (usually polyvinyl alcohol-acetate) and water. High shear, relatively low pressure (about 5 MPa), and moderate temperature (about 80-100 °C) are applied during the production of this material. MDF cements, although they consist of more than 80% cement by weight, show 20-30 times higher flexural strength compared to ordinary Portland cement. However, MDF cements show a considerable reduction in strength when exposed to water, even for a short time. Many studies have been conducted over more than 30 years to solve the water sensitivity of MDF cements. In this study, the production, basic properties, and current state of the art of MDF cements are explained, and future research directions are suggested.

