Coordinated Reasoning for Cross-Lingual Knowledge Graph Alignment

2020 ◽  
Vol 34 (05) ◽  
pp. 9354-9361
Author(s):  
Kun Xu ◽  
Linfeng Song ◽  
Yansong Feng ◽  
Yan Song ◽  
Dong Yu

Existing entity alignment methods mainly differ in how they encode the knowledge graph, but they typically share the same decoding method, which independently chooses the locally optimal match for each source entity. This decoding method not only causes the “many-to-one” problem but also neglects the coordinated nature of the task: each alignment decision may be highly correlated with the other decisions. In this paper, we introduce two coordinated reasoning methods, namely an Easy-to-Hard decoding strategy and a joint entity alignment algorithm. Specifically, the Easy-to-Hard strategy first retrieves the model-confident alignments from the predicted results and then incorporates them as additional knowledge to resolve the remaining model-uncertain alignments. To achieve this, we further propose an enhanced alignment model built on the current state-of-the-art baseline. In addition, to address the many-to-one problem, we propose to jointly predict entity alignments so that the one-to-one constraint can be naturally incorporated into the alignment prediction. Experimental results show that our model achieves state-of-the-art performance and that our reasoning methods also significantly improve existing baselines.
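The many-to-one failure mode of independent decoding, and the effect of imposing a one-to-one constraint, can be illustrated with a toy similarity matrix. The scores below and the brute-force assignment search are purely illustrative, not the paper's algorithm:

```python
from itertools import permutations

# Hypothetical similarity scores between 3 source and 3 target entities
# (rows = source entities, columns = target entities).
sim = [
    [0.90, 0.80, 0.10],
    [0.85, 0.20, 0.30],
    [0.10, 0.20, 0.70],
]

def greedy_decode(sim):
    """Independently pick the best target for each source entity."""
    return [max(range(len(row)), key=row.__getitem__) for row in sim]

def joint_decode(sim):
    """Search all one-to-one assignments; keep the highest-scoring one."""
    n = len(sim)
    best = max(permutations(range(n)),
               key=lambda p: sum(sim[i][p[i]] for i in range(n)))
    return list(best)

print(greedy_decode(sim))  # sources 0 and 1 collide on target 0
print(joint_decode(sim))   # one-to-one assignment with maximal total score
```

In practice a joint predictor would not enumerate permutations (factorial in the number of entities); polynomial-time assignment algorithms such as the Hungarian method achieve the same one-to-one decoding at scale.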

Author(s):  
Fan Xiong ◽  
Jianliang Gao

Graph convolutional networks (GCNs) are a promising approach that has recently been used for knowledge graph alignment. In this paper, we propose a new entity alignment method for cross-lingual knowledge graphs. In this method, we design an attribute embedding scheme for GCN training. Furthermore, the GCN model utilizes the attribute embedding and the structure embedding simultaneously to extract graph features. Our preliminary experiments show that the proposed method outperforms the state-of-the-art GCN-based method.
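The core idea of combining structure and attribute information can be sketched as a single neighborhood-aggregation step over concatenated feature vectors. The graph, the feature values, and the plain averaging rule below are all illustrative stand-ins for a trained GCN layer:

```python
# Toy propagation step: each node's new representation is the mean of the
# concatenated structure+attribute features of itself and its neighbors.
graph = {0: [1], 1: [0, 2], 2: [1]}            # hypothetical adjacency
struct = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}  # structure embedding
attr = {0: [0.5], 1: [0.2], 2: [0.9]}          # attribute embedding

def propagate(graph, feats):
    out = {}
    for v, nbrs in graph.items():
        group = [v] + nbrs
        dim = len(feats[v])
        out[v] = [sum(feats[u][i] for u in group) / len(group)
                  for i in range(dim)]
    return out

# Concatenate the two embeddings, then aggregate over the neighborhood.
features = {v: struct[v] + attr[v] for v in graph}
hidden = propagate(graph, features)
```

A real GCN layer would additionally apply a learned weight matrix and a nonlinearity after the aggregation; the sketch only shows how the two embedding types enter the same propagation.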


10.37236/35 ◽  
2013 ◽  
Vol 1000 ◽  
Author(s):  
Mirka Miller ◽  
Jozef Širáň

The degree/diameter problem is to determine the largest graphs or digraphs of given maximum degree and given diameter. General upper bounds, called Moore bounds, for the order of such graphs and digraphs are attainable only for certain special graphs and digraphs. Finding better (tighter) upper bounds for the maximum possible number of vertices, given the other two parameters, and thus attacking the degree/diameter problem 'from above', remains a largely unexplored area. Constructions producing large graphs and digraphs of given degree and diameter represent a way of attacking the degree/diameter problem 'from below'. This survey aims to give an overview of the current state of the art of the degree/diameter problem. We focus mainly on the above two streams of research. However, we could not resist also mentioning results on various related problems. These include Moore-like bounds for special types of graphs and digraphs, such as vertex-transitive, Cayley, planar, and bipartite graphs, among many others, on the one hand, and related properties such as connectivity, regularity, and surface embeddability, on the other.
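For reference, the Moore bounds mentioned above have standard closed forms, obtained by counting the vertices reachable within distance $k$ from a fixed vertex: for an undirected graph of maximum degree $d$ and diameter $k$, and for a digraph of maximum out-degree $d$ and diameter $k$,

```latex
% Undirected Moore bound (d > 2)
M_{d,k} = 1 + d + d(d-1) + \cdots + d(d-1)^{k-1}
        = 1 + d\,\frac{(d-1)^{k} - 1}{d - 2}

% Directed Moore bound (d > 1)
\overrightarrow{M}_{d,k} = 1 + d + d^{2} + \cdots + d^{k}
        = \frac{d^{k+1} - 1}{d - 1}
```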


Imbizo ◽  
2017 ◽  
Vol 7 (1) ◽  
pp. 40-54
Author(s):  
Oyeh O. Otu

This article examines how female conditioning and sexual repression affect the woman’s sense of self, womanhood, identity and her place in society. It argues that the woman’s body is at the core of the many sites of gender struggles/politics. Accordingly, the woman’s body must be decolonised for her to attain true emancipation. On the one hand, this study identifies the grave consequences of sexual repression, how it robs women of their freedom to choose whom to love or marry, the freedom to seek legal redress against sexual abuse and terror, and how it hinders their quest for self-determination. On the other hand, it underscores the need to give women sexual freedom that must be respected and enforced by law for the overall good of society.


Author(s):  
Alexander Diederich ◽  
Christophe Bastien ◽  
Karthikeyan Ekambaram ◽  
Alexis Wilson

The introduction of automated L5 driving technologies will revolutionise the design of vehicle interiors and seating configurations, improving occupant comfort and experience. It is foreseen that pre-crash emergency braking and swerving manoeuvres will affect occupant posture, which could lead to an interaction with a deploying airbag. This research addresses the urgent safety need of defining the occupant’s kinematics envelope during that pre-crash phase, considering rotated seat arrangements and different seatbelt configurations. The research used two different sets of volunteer tests experiencing L5 vehicle manoeuvres, based in the first instance on 22 fit 50th-percentile males wearing a lap belt (OM4IS), while the other dataset is based on 87 volunteers with a BMI range of 19 to 67 kg/m² wearing a 3-point belt (UMTRI). Unique biomechanical kinematics corridors were then defined, as a function of belt configuration and vehicle manoeuvre, to calibrate an Active Human Model (AHM) using multi-objective optimisation coupled with a CORrelation and Analysis (CORA) rating. The research improved the AHM omnidirectional kinematics response over the current state of the art in a generic lap-belted environment. The AHM was then tested in a rotated seating arrangement under extreme braking, highlighting that maximum lateral and frontal motions are comparable, independent of the belt system, while the asymmetry of the 3-point belt increased the occupant’s motion towards the seatbelt buckle. It was observed that frontal occupant motion decreases by 200 mm compared to a lap-belted configuration. This improved omnidirectional AHM is the first step towards designing safer future L5 vehicle interiors.


Database ◽  
2021 ◽  
Vol 2021 ◽  
Author(s):  
Yifan Shao ◽  
Haoru Li ◽  
Jinghang Gu ◽  
Longhua Qian ◽  
Guodong Zhou

Abstract Extraction of causal relations between biomedical entities in the form of Biological Expression Language (BEL) poses a new challenge to the community of biomedical text mining due to the complexity of BEL statements. We propose a simplified form of BEL statements [Simplified Biological Expression Language (SBEL)] to facilitate BEL extraction and employ BERT (Bidirectional Encoder Representations from Transformers) to improve the performance of causal relation extraction (RE). On the one hand, BEL statement extraction is transformed into the extraction of an intermediate form, the SBEL statement, which is then further decomposed into two subtasks: entity RE and entity function detection. On the other hand, we use a powerful pretrained BERT model both to extract entity relations and to detect entity functions, aiming to improve the performance of the two subtasks. Entity relations and functions are then combined into SBEL statements and finally merged into BEL statements. Experimental results on the BioCreative-V Track 4 corpus demonstrate that our method achieves state-of-the-art performance in BEL statement extraction, with F1 scores of 54.8% in the Stage 2 evaluation and 30.1% in the Stage 1 evaluation. Database URL: https://github.com/grapeff/SBEL_datasets
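The recombination step, where a predicted entity relation and predicted entity functions are assembled back into a BEL-style statement, can be sketched as simple string templating. The entity names, namespace, and exact statement grammar below are illustrative, not the paper's implementation:

```python
# Hypothetical outputs of the two subtasks: one entity relation and a map
# from each entity to its detected function (here "act" = activity).
relation = ("AKT1", "increases", "MDM2")
functions = {"AKT1": "act", "MDM2": "act"}

def to_bel(rel, funcs):
    """Assemble a BEL-like statement from a relation triple and functions."""
    subj, predicate, obj = rel
    def wrap(entity):
        term = f"p(HGNC:{entity})"  # protein abundance term
        return f"{funcs[entity]}({term})" if entity in funcs else term
    return f"{wrap(subj)} {predicate} {wrap(obj)}"

stmt = to_bel(relation, functions)
print(stmt)
```

The point of the decomposition in the paper is that each piece of this template, the relation and each function, can be predicted by a separate BERT-based classifier before being merged.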


2000 ◽  
Vol 11 (3) ◽  
pp. 261-264 ◽  
Author(s):  
Tricia S. Clement ◽  
Thomas R. Zentall

We tested the hypothesis that pigeons could use a cognitively efficient coding strategy by training them on a conditional discrimination (delayed symbolic matching) in which one alternative was correct following the presentation of one sample (one-to-one), whereas the other alternative was correct following the presentation of any one of four other samples (many-to-one). When retention intervals of different durations were inserted between the offset of the sample and the onset of the choice stimuli, divergent retention functions were found. With increasing retention interval, matching accuracy on trials involving any of the many-to-one samples was increasingly better than matching accuracy on trials involving the one-to-one sample. Furthermore, following this test, pigeons treated a novel sample as if it had been one of the many-to-one samples. The data suggest that rather than learning each of the five sample-comparison associations independently, the pigeons developed a cognitively efficient single-code/default coding strategy.


1998 ◽  
Vol 08 (01) ◽  
pp. 21-66 ◽  
Author(s):  
W. M. P. VAN DER AALST

Workflow management promises a new solution to an age-old problem: controlling, monitoring, optimizing and supporting business processes. What is new about workflow management is the explicit representation of the business process logic, which allows for computerized support. This paper discusses the use of Petri nets in the context of workflow management. Petri nets are an established tool for modeling and analyzing processes. On the one hand, Petri nets can be used as a design language for the specification of complex workflows. On the other hand, Petri net theory provides powerful analysis techniques which can be used to verify the correctness of workflow procedures. This paper introduces workflow management as an application domain for Petri nets, presents state-of-the-art results with respect to the verification of workflows, and highlights some Petri-net-based workflow tools.
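The basic Petri net execution rule, a transition fires when all of its input places hold tokens, consuming them and producing tokens on its output places, can be shown with a minimal sketch. The two-step workflow below is a hypothetical example, and the set-based marking only handles at most one token per place (a 1-safe net):

```python
# Each transition maps its name to (input places, output places).
transitions = {
    "register": ({"start"}, {"ready"}),
    "handle":   ({"ready"}, {"done"}),
}

def run(initial_marking, transitions):
    """Fire enabled transitions until none is enabled; return final marking."""
    marking = set(initial_marking)
    fired = True
    while fired:
        fired = False
        for name, (pre, post) in transitions.items():
            if pre <= marking:  # enabled: all input places are marked
                marking = (marking - pre) | post
                fired = True
    return marking

final = run({"start"}, transitions)
print(final)
```

Workflow verification in the Petri net sense asks stronger questions than this simulation, e.g. soundness: from the initial marking, every run can still reach the final marking with no tokens left behind, but the firing rule above is the building block those analyses rest on.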


1927 ◽  
Vol 23 (8) ◽  
pp. 839-845
Author(s):  
V. P. Roshchin

The problem of glaucoma has, for many reasons, occupied and continues to occupy a prominent place in the ophthalmic press. It is enough to recall that 19% of all blind people owe their misfortune to glaucoma to understand why interest in this affliction has never faded among ophthalmologists. Furthermore, no ophthalmologist is quite sure that a given method of treatment, even when the patient has sought medical attention in time, can definitely prevent a sad outcome in every single case. This, together with the absence of a unified and correct view of the essence of glaucoma, keeps ophthalmologists in a constant state of flux, striving to uncover the hidden springs of the disease process on the one hand, and to find a more radical means to combat it on the other.


2020 ◽  
Vol 20 (9&10) ◽  
pp. 747-765
Author(s):  
F. Orts ◽  
G. Ortega ◽  
E.M. Garzon

Despite the great interest that the scientific community has in quantum computing, the scarcity and high cost of resources hinder progress in this field. Specifically, qubits are very expensive to build, so the few available quantum computers are tremendously limited in their number of qubits, which delays progress. This work presents new reversible circuits that optimize the resources necessary for the conversion of a signed binary number into its two's complement of N digits. The benefits of our work are twofold: on the one hand, the proposed two's complement converters are fault-tolerant circuits and are also more efficient in terms of resources (essentially quantum cost, number of qubits, and T-count) than those described in the literature. On the other hand, valuable information about available converters and, what is more, quantum adders is summarized in tables for interested researchers. The converters have been measured using robust metrics and compared with state-of-the-art circuits. The code to build them on a real quantum computer is given.
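As a classical reference for what such reversible circuits compute, converting an N-bit sign-magnitude number into N-bit two's complement amounts to negating the magnitude modulo 2^N when the sign bit is set. This sketch is the plain classical arithmetic, not the paper's quantum circuit:

```python
def sm_to_twos(bits):
    """Convert an N-bit sign-magnitude number (bits[0] is the sign bit)
    into its N-bit two's-complement representation, as a list of bits."""
    n = len(bits)
    sign, magnitude = bits[0], bits[1:]
    value = int("".join(map(str, magnitude)) or "0", 2)
    if sign:
        value = ((1 << n) - value) % (1 << n)  # negate modulo 2^n
    return [int(b) for b in format(value, f"0{n}b")]

print(sm_to_twos([1, 1, 0, 1]))  # -5 in sign-magnitude -> 1011
print(sm_to_twos([0, 1, 0, 1]))  # +5 is unchanged     -> 0101
```

A reversible implementation must realize this map with invertible gates and no information loss, which is exactly where the quantum cost, qubit count, and T-count metrics compared in the paper come in.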


Author(s):  
Ahlam Fuad ◽  
Amany bin Gahman ◽  
Rasha Alenezy ◽  
Wed Ateeq ◽  
Hend Al-Khalifa

The plural of paucity is one type of broken plural used in Classical Arabic. It is used when the number of people or objects ranges from three to ten. Based on our evaluation of four current state-of-the-art Arabic morphological analyzers, there is a lack of identification of broken plural words, specifically the plural of paucity. Therefore, this paper presents “[Formula: see text]” Qillah (paucity), a morphological extension built on top of other morphological analyzers that uses a hybrid rule-based and lexicon-based approach to enhance the identification of the plural of paucity. Two versions of Qillah were developed: one based on the FARASA morphological analyzer and the other on the CALIMA Star analyzer, as these are among the best-performing morphological analyzers. We designed two experiments to evaluate the effectiveness of our proposed solution on a collection of 402 different Arabic words. The version based on CALIMA Star achieved a maximum accuracy of 93% in identifying plural-of-paucity words compared to the baselines. It also achieved a maximum accuracy of 98% compared to the baselines in identifying the plurality of the words.
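The rule-based side of such a hybrid approach can be sketched as template matching against the classical paucity patterns. The regex below covers only one of the four patterns, afʿāl (أفعال), on unvocalized text, and the example words are illustrative; a production analyzer would need all four patterns plus the lexicon check the paper describes:

```python
import re

# afʿāl (أفعال): initial hamza-on-alif, two root consonants, a long alif,
# then the final root consonant. Matching is on the unvocalized skeleton.
AFAAL = re.compile(r"^أ..ا.$")

words = ["أقلام", "كتاب", "أنهار"]  # "pens", "book", "rivers"
matches = [w for w in words if AFAAL.match(w)]
print(matches)
```

Skeleton matching alone overgenerates (non-plural words can share the shape), which is why the paper pairs rules with a lexicon-based check.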

