Similarity Measure of Graphs

Author(s):  
Amine Labriji ◽  
Salma Charkaoui ◽  
Issam Abdelbaki ◽  
Abdelouhaed Namir ◽  
El Houssine Labriji

The topic of identifying the similarity of graphs has been regarded as an important research field in the semantic Web, artificial intelligence, shape recognition, and information retrieval. One of the fundamental problems of graph databases is finding the graphs most similar to a query graph. Existing approaches to this problem are usually based on the nodes and arcs of the two graphs, without considering hierarchical semantic links between concepts. For instance, when two graphs share no common concepts, measures based on the union of the two graphs, on the maximum common subgraph (MCS), or on graph edit distance fail to recognize a shared parent concept as contributing to the similarity of the two graphs. This leads to inadequate results in the context of information retrieval. To overcome this problem, we suggest a new measure of similarity between graphs, based on the similarity measure of Wu and Palmer. We show that this new measure satisfies the properties of a similarity measure, and we apply it to examples. The results show that our measure runs faster than existing approaches. In addition, we compared the relevance of the similarity values obtained; the results indicate that this new graph measure is advantageous and offers a contribution to solving the problem mentioned above.
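The Wu and Palmer measure that this abstract builds on scores two concepts by the depth of their lowest common subsumer in a taxonomy. A minimal sketch of that underlying concept measure (not the paper's graph-level extension); the toy taxonomy and concept names are illustrative:

```python
# Toy taxonomy as a child -> parent map (a tree rooted at "entity").
PARENT = {
    "animal": "entity",
    "plant": "entity",
    "dog": "animal",
    "cat": "animal",
}

def ancestors(c):
    """Path from concept c up to the root, inclusive, c first."""
    path = [c]
    while c in PARENT:
        c = PARENT[c]
        path.append(c)
    return path

def depth(c):
    """Depth of c in the taxonomy, counting the root as depth 1."""
    return len(ancestors(c))

def wu_palmer(c1, c2):
    """Wu-Palmer: 2 * depth(LCS) / (depth(c1) + depth(c2))."""
    anc1 = set(ancestors(c1))
    # The first ancestor of c2 also above c1 is the lowest common subsumer.
    lcs = next(c for c in ancestors(c2) if c in anc1)
    return 2 * depth(lcs) / (depth(c1) + depth(c2))

print(wu_palmer("dog", "cat"))    # shared parent "animal" -> 2/3
print(wu_palmer("dog", "plant"))  # only the root in common -> 0.4
```

Note how a shared parent ("animal") yields a higher score than sharing only the root, which is exactly the hierarchical signal that union-based, MCS-based, and edit-distance measures ignore.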

Author(s):  
Fadi Badra

Analogical transfer consists in leveraging a measure of similarity between two situations to predict the amount of similarity between their outcomes. Acquiring a suitable similarity measure for analogical transfer may be difficult, especially when the data is sparse or when the domain knowledge is incomplete. To alleviate this problem, this paper presents a dataset complexity measure that can be used either to select an optimal similarity measure, or if the similarity measure is given, to perform analogical transfer: among the potential outcomes of a new situation, the most plausible is the one which minimizes the dataset complexity.
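The decision rule stated in the abstract can be sketched generically. The dataset complexity measure itself is not specified here, so `complexity` is a hypothetical stand-in supplied by the caller (the toy measure below merely counts distinct outcome values as a crude description-length proxy):

```python
def transfer(dataset, new_situation, candidate_outcomes, complexity):
    """Analogical transfer: among candidate outcomes for the new
    situation, return the one whose addition to the dataset yields
    the lowest complexity score."""
    return min(
        candidate_outcomes,
        key=lambda outcome: complexity(dataset + [(new_situation, outcome)]),
    )

def toy_complexity(dataset):
    """Illustrative only: number of distinct outcome values."""
    return len({outcome for _, outcome in dataset})

cases = [(1, "low"), (2, "low")]
print(transfer(cases, 3, ["low", "high"], toy_complexity))  # "low"
```

The point of the sketch is the shape of the rule: prediction is reduced to a minimization over candidate outcomes, so the same code supports either a learned similarity-derived complexity or a hand-crafted one.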


2021 ◽  
Vol 14 ◽  
pp. 194008292110147
Author(s):  
Dipto Sarkar ◽  
Colin A. Chapman

The term ‘smart forest’ is not yet common, but the proliferation of sensors, algorithms, and technocentric thinking in conservation, as in most other aspects of our lives, suggests we are at the brink of this evolution. While there has been some critical discussion about the value of using smart technology in conservation, a holistic discussion about the broader technological, social, and economic interactions involved with using big data, sensors, artificial intelligence, and global corporations is largely missing. Here, we explore the pitfalls that are useful to consider as forests are gradually converted to technological sites of data production for optimized biodiversity conservation and are consequently incorporated in the digital economy. We consider who the enablers of the technologically enhanced forests are and how the gradual operationalization of smart forests will impact the traditional stakeholders of conservation. We also look at the implications of carpeting forests with sensors and the type of questions that will be encouraged. To contextualize our arguments, we provide examples from our work in Kibale National Park, Uganda, which hosts one of the longest continuously running research field stations in Africa.


2018 ◽  
Vol 10 (9) ◽  
pp. 3245 ◽  
Author(s):  
Tianxing Wu ◽  
Guilin Qi ◽  
Cheng Li ◽  
Meng Wang

With the continuous development of intelligent technologies, knowledge graph, the backbone of artificial intelligence, has attracted much attention from both academic and industrial communities due to its powerful capability of knowledge representation and reasoning. In recent years, knowledge graph has been widely applied in different kinds of applications, such as semantic search, question answering, knowledge management and so on. Techniques for building Chinese knowledge graphs are also developing rapidly, and different Chinese knowledge graphs have been constructed to support various applications. Against the background of the “One Belt One Road (OBOR)” initiative, cooperating with the countries along OBOR on studying knowledge graph techniques and applications will greatly promote the development of artificial intelligence. At the same time, the accumulated experience of China in developing knowledge graphs is also a good reference for developing non-English knowledge graphs. In this paper, we aim to introduce the techniques of constructing Chinese knowledge graphs and their applications, as well as analyse the impact of knowledge graph on OBOR. We first describe the background of OBOR, and then introduce the concept and development history of knowledge graph and typical Chinese knowledge graphs. Afterwards, we present the details of techniques for constructing Chinese knowledge graphs, and demonstrate several applications of Chinese knowledge graphs. Finally, we list some examples to explain the potential impacts of knowledge graph on OBOR.


2014 ◽  
Vol 15 (1) ◽  
pp. 68-74 ◽  
Author(s):  
Doug Reside

In the first section of the submission guidelines for this esteemed journal, would-be authors are informed, “RBM: A Journal of Rare Books, Manuscripts, and Cultural Heritage uses a web-based, automated, submission system to track and review manuscripts. Manuscripts should be sent to the editor, […], through the web portal[…]” The multivalent uses of the word “manuscript” in this sentence reveal a good deal about the state of our field. This journal is dedicated to the study of manuscripts, and it is understood by most readers that the manuscripts being studied are of the “one-of-a-kind” variety (even rarer than the “rare . . .


Cancers ◽  
2021 ◽  
Vol 13 (19) ◽  
pp. 4740
Author(s):  
Fabiano Bini ◽  
Andrada Pica ◽  
Laura Azzimonti ◽  
Alessandro Giusti ◽  
Lorenzo Ruinelli ◽  
...  

Artificial intelligence (AI) uses mathematical algorithms to perform tasks that require human cognitive abilities. AI-based methodologies, e.g., machine learning and deep learning, as well as the recently developed research field of radiomics, have noticeable potential to transform medical diagnostics. AI-based techniques applied to medical imaging make it possible to detect biological abnormalities, to diagnose neoplasms, or to predict the response to treatment. Nonetheless, the diagnostic accuracy of these methods is still a matter of debate. In this article, we first illustrate the key concepts and workflow characteristics of machine learning, deep learning and radiomics. We outline considerations regarding data input requirements, differences among these methodologies and their limitations. Subsequently, a concise overview is presented regarding the application of AI methods to the evaluation of thyroid images. We develop a critical discussion concerning the limits and open challenges that should be addressed before the translation of AI techniques to broad clinical use. Clarifying the pitfalls of AI-based techniques is crucial in order to ensure the optimal application for each patient.


2020 ◽  
Vol 6 (2) ◽  
pp. 54-71
Author(s):  
Raquel Borges Blázquez

Artificial intelligence has countless advantages in our lives. On the one hand, a computer’s capacity to store and connect data is far superior to human capacity. On the other hand, its “intelligence” also involves deep ethical problems that the law must respond to. I say “intelligence” because nowadays machines are not intelligent. Machines only use the data that a human being has previously offered as true. The truth is relative, and the data will have the same biases and prejudices as the human who programs the machine. In other words, machines will be racist, sexist and classist if their programmers are. Furthermore, we are facing a new problem: the difficulty of understanding the algorithms used by those who apply the law. This situation forces us to rethink the criminal process to incorporate artificial intelligence, drawing fine distinctions as to how, when, why and under what assumptions we can make use of artificial intelligence and, above all, who is going to program it. At the end of the day, as Silvia Barona indicates, perhaps the question should be: who is going to control global legal thinking?


Law and World ◽  
2021 ◽  
Vol 7 (5) ◽  
pp. 8-13

In the digital era, technological advances have brought innovative opportunities. Artificial intelligence is a real instrument for automating routine tasks in different fields (healthcare, education, the justice system, foreign and security policies, etc.). AI is evolving very fast. More precisely, robots are re-programmable, multi-purpose devices designed for the handling of materials and tools, for the processing of parts, or as specialized devices utilizing varying programmed movements to complete a variety of tasks. Regardless of its opportunities, artificial intelligence may pose some risks and challenges for us. Because of the nature of AI, ethical and legal questions arise, especially in terms of protecting human rights. The power of artificial intelligence lies in analyzing big data more effectively than a human being can. On the one hand, it causes the loss of traditional jobs and, on the other hand, it promotes the creation of digital equivalents of workers with automatic routine task capabilities. “Artificial intelligence must serve people, and therefore artificial intelligence must always comply with people’s rights,” said Ursula von der Leyen, President of the European Commission. The EU has a clear vision of the development of the legal framework for AI. In the light of the above, the article aims to explore the legal aspects of artificial intelligence based on the European experience. Furthermore, it is essential in the context of Georgia’s European integration. Analyzing the legal approaches of the EU will promote an approximation of the Georgian legislation to the EU standards in this field. Also, it will help define AI’s role in the effective digital transformation of public and private sectors in Georgia.


Author(s):  
Anne Atlan ◽  
Nathalie Udo

This study analyzes the natural and social factors influencing the emergence and publicization of the invasive status of a fast-growing bush, gorse (Ulex europaeus), by comparison between countries on a global scale. We used documents collected on the web in a standardized way. The results show that in all the countries studied, several public statuses are attributed to gorse. The invasive status is the one that is most shared. The other most frequently encountered statuses are those of noxious weed and of economically useful plant. The invasive status is publicized in nearly all countries, including those where gorse is almost absent. We quantified the publicization of the invasive status of gorse using a five-level indicator, and then performed a multivariate analysis that combines natural and social explanatory variables. The results lead us to propose the concept of the invasive niche: the set of natural and social parameters that allow a species to be considered invasive in a given socio-ecosystem.

