OntoClippy

Author(s):  
Nikolai Dahlem

In this article, the author describes OntoClippy, a tool-supported methodology for the user-friendly design and creation of ontologies. Existing ontology design methodologies and tools are targeted at experts and are not suitable for users without a background in formal logic. Therefore, this research develops a methodology and a supporting tool to facilitate the acceptance of ontologies by a wider audience. The author positions the approach with respect to the current state of the art, formulates the basic principles of the methodology, presents its formal grounding, and describes its phases in detail. To demonstrate the viability of the approach, the author performs a comparative evaluation and describes both the experiment and real-world applications of the approach.


Semantic Web ◽  
2021 ◽  
pp. 1-16
Author(s):  
Esko Ikkala ◽  
Eero Hyvönen ◽  
Heikki Rantala ◽  
Mikko Koho

This paper presents a new software framework, Sampo-UI, for developing user interfaces for semantic portals. The goal is to provide the end-user with multiple application perspectives on Linked Data knowledge graphs, and a two-step usage cycle based on faceted search combined with ready-to-use tooling for data analysis. For the software developer, the Sampo-UI framework makes it possible to create highly customizable, user-friendly, and responsive user interfaces using current state-of-the-art JavaScript libraries and data from SPARQL endpoints, while saving substantial coding effort. Sampo-UI is published on GitHub under the open MIT License and has been utilized in several internal and external projects. The framework has so far been used to create six published and five forthcoming portals, mostly related to the Cultural Heritage domain, which have had tens of thousands of end-users on the Web.
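As a rough illustration of the kind of data layer such portals build on, the sketch below queries a SPARQL endpoint over HTTP in Python; the endpoint URL and query are placeholders of ours, and this is not Sampo-UI's own (JavaScript) API.

```python
import requests

# Placeholder endpoint; any SPARQL endpoint serving JSON results would do.
ENDPOINT = "https://example.org/sparql"

QUERY = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?item ?label WHERE { ?item rdfs:label ?label . } LIMIT 10
"""

def run_sparql(endpoint: str, query: str) -> list:
    """Send a SPARQL SELECT query and return the JSON result bindings."""
    resp = requests.get(
        endpoint,
        params={"query": query},
        headers={"Accept": "application/sparql-results+json"},
    )
    resp.raise_for_status()
    return resp.json()["results"]["bindings"]

for row in run_sparql(ENDPOINT, QUERY):
    print(row["item"]["value"], row["label"]["value"])
```

Faceted search in such a portal amounts to rewriting this query with extra triple patterns as the user selects facet values, which is the part Sampo-UI's tooling automates.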


2021 ◽  
Vol 54 (6) ◽  
pp. 1-35
Author(s):  
Ninareh Mehrabi ◽  
Fred Morstatter ◽  
Nripsuta Saxena ◽  
Kristina Lerman ◽  
Aram Galstyan

With the widespread use of artificial intelligence (AI) systems and applications in our everyday lives, accounting for fairness has gained significant importance in the design and engineering of such systems. AI systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that these decisions do not reflect discriminatory behavior toward certain groups or populations. More recently, work in traditional machine learning and deep learning has begun to address such challenges in different subdomains. With the commercialization of these systems, researchers are becoming more aware of the biases that these applications can contain and are attempting to address them. In this survey, we investigated different real-world applications that have exhibited bias in various ways, and we listed different sources of bias that can affect AI applications. We then created a taxonomy of the fairness definitions that machine learning researchers have proposed to avoid existing bias in AI systems. In addition, we examined different domains and subdomains in AI, showing what researchers have observed with regard to unfair outcomes in state-of-the-art methods and the ways they have tried to address them. Many future directions and solutions remain for mitigating the problem of bias in AI systems. We hope that this survey will motivate researchers to tackle these issues in the near future by building on existing work in their respective fields.
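As one concrete instance of the fairness definitions such taxonomies catalogue, the sketch below computes the demographic (statistical) parity gap of a binary classifier; it is an illustrative example chosen here, not code from the survey.

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1); group: binary group membership (0/1).
    A gap of 0 means the classifier satisfies demographic parity exactly.
    """
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return abs(rate_0 - rate_1)

# Toy example: 75% positive rate for group 0 vs. 25% for group 1.
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))  # 0.5
```

Other definitions in the taxonomy (equalized odds, calibration, individual fairness) condition on additional quantities such as the true label, and they are generally not satisfiable simultaneously.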


2021 ◽  
Vol 11 (1) ◽  
pp. 85-91
Author(s):  
Sh. Kh. Gantsev ◽  
M. V. Zabelin ◽  
K. Sh. Gantsev ◽  
A. A. Izmailov ◽  
Sh. R. Kzyrgalin

Peritoneal carcinomatosis (PC) is a global challenge of modern oncology and represents the most unfavourable scenario in tumours of diverse primary localisation. Despite some attention from the oncological community, the management of PC patients remains palliative, which does little to encourage research into the basic mechanisms of this condition. This literature review attempts to cover the problem of PC comprehensively from a global perspective and presents key evidence on the world's schools of thought in this area. In brief, peritoneal carcinomatosis is viewed today, under the conventional implantation theory, as a local process, which imposes a locoregional character on all current and emerging therapies, such as cytoreductive surgery and hyperthermic intraperitoneal chemotherapy. Their inadequate efficacy is largely due to pronounced gaps in our understanding of PC logistics and signalling. PSOGI is the key organisation centralising specialist effort on peritoneal carcinomatosis. Despite its global reach and broad approach to PC discussion, a multitude of scientific questions remain unanswered, impeding the development of novel effective therapies. The seven countries that have nurtured distinguished schools of thought in PC research are the USA, the UK, Japan, China, Italy, France and Germany. Viewed from this global perspective, the insufficient attention to PC in Russia should be addressed: founding and fostering national PC institutions would benefit cancer patients and advance oncological science.


2020 ◽  
Vol 68 ◽  
pp. 311-364
Author(s):  
Francesco Trovo ◽  
Stefano Paladino ◽  
Marcello Restelli ◽  
Nicola Gatti

Multi-Armed Bandit (MAB) techniques have been successfully applied to many classes of sequential decision problems over the past decades. However, non-stationary settings -- very common in real-world applications -- have received little attention so far, and theoretical guarantees on the regret are known only for some frequentist algorithms. In this paper, we propose an algorithm, namely Sliding-Window Thompson Sampling (SW-TS), for non-stationary stochastic MAB settings. Our algorithm is based on Thompson Sampling and exploits a sliding-window approach to tackle, in a unified fashion, two different forms of non-stationarity studied separately so far: abruptly changing and smoothly changing environments. In the former, the reward distributions are constant during sequences of rounds, and their changes may be arbitrary and happen at unknown rounds; in the latter, the reward distributions evolve smoothly over rounds according to unknown dynamics. Under mild assumptions, we provide upper bounds on the dynamic pseudo-regret of SW-TS for the abruptly changing environment, for the smoothly changing one, and for the setting in which both forms of non-stationarity are present. Furthermore, we empirically show that SW-TS dramatically outperforms state-of-the-art algorithms even when each form of non-stationarity is taken separately, as previously studied in the literature.
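A minimal sketch of the sliding-window mechanism for Bernoulli rewards, assuming Beta(1, 1) priors and a fixed window size; this illustrates the idea behind SW-TS, not the authors' exact algorithm or the variant covered by their analysis.

```python
import random
from collections import deque

class SlidingWindowTS:
    """Thompson Sampling over a sliding window of recent (arm, reward) pairs.

    Only the last `window` observations inform the Beta posteriors, so the
    policy forgets stale rewards and can track non-stationary arms.
    """

    def __init__(self, n_arms: int, window: int):
        self.n_arms = n_arms
        self.history = deque(maxlen=window)  # old samples fall out automatically

    def select_arm(self) -> int:
        # Count successes/failures per arm inside the window only.
        wins = [0] * self.n_arms
        losses = [0] * self.n_arms
        for arm, reward in self.history:
            if reward:
                wins[arm] += 1
            else:
                losses[arm] += 1
        # Sample from each arm's Beta(1 + wins, 1 + losses) posterior.
        samples = [random.betavariate(1 + wins[a], 1 + losses[a])
                   for a in range(self.n_arms)]
        return max(range(self.n_arms), key=samples.__getitem__)

    def update(self, arm: int, reward: int) -> None:
        self.history.append((arm, reward))

# Toy run on an abruptly changing environment: arm means swap at t = 500.
bandit = SlidingWindowTS(n_arms=2, window=200)
for t in range(1000):
    means = [0.8, 0.2] if t < 500 else [0.2, 0.8]
    arm = bandit.select_arm()
    bandit.update(arm, int(random.random() < means[arm]))
```

After the change point, the stale observations leave the window within 200 rounds, so the posteriors re-concentrate on the new best arm instead of being anchored by the full history.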


Entropy ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. 407
Author(s):  
Dominik Weikert ◽  
Sebastian Mai ◽  
Sanaz Mostaghim

In this article, we present a new algorithm called Particle Swarm Contour Search (PSCS), a Particle Swarm Optimisation-inspired algorithm to find object contours in 2D environments. Currently, most contour-finding algorithms are based on image processing and require a complete overview of the search space in which the contour is to be found. In real-world applications, however, such complete knowledge of the search space may not always be available or feasible to obtain. The proposed algorithm removes this requirement and relies only on the local information of the particles to accurately identify a contour. Particles search for the contour of an object and then traverse along it, using their accumulated knowledge of positions inside and outside the object. Our experiments show that the proposed PSCS algorithm delivers results comparable to the state of the art.
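As a rough sketch of the local-information idea, the following swarm uses standard PSO updates to drive particles toward an implicit contour f(x, y) = 0 using only pointwise evaluations; the unit-circle object and the parameters are our illustrative choices, and the contour-traversal phase of PSCS is omitted.

```python
import random

def f(x: float, y: float) -> float:
    """Implicit object boundary: f = 0 on the contour of a unit circle."""
    return x * x + y * y - 1.0

N, STEPS, W, C1, C2 = 30, 200, 0.5, 1.5, 1.5

# Each particle keeps a position, a velocity, and its personal best.
pos = [[random.uniform(-2, 2), random.uniform(-2, 2)] for _ in range(N)]
vel = [[0.0, 0.0] for _ in range(N)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=lambda p: abs(f(*p)))[:]

for _ in range(STEPS):
    for i in range(N):
        for d in range(2):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        # Minimising |f| pulls particles onto the zero-level contour.
        if abs(f(*pos[i])) < abs(f(*pbest[i])):
            pbest[i] = pos[i][:]
            if abs(f(*pos[i])) < abs(f(*gbest)):
                gbest = pos[i][:]

print("point near contour:", gbest, "f =", f(*gbest))
```

Note that each particle only ever evaluates f at its own position, mirroring the paper's premise that no global overview of the search space is available.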


2008 ◽  
Vol 8 (5-6) ◽  
pp. 545-580
Author(s):  
WOLFGANG FABER ◽  
GERALD PFEIFER ◽  
NICOLA LEONE ◽  
TINA DELL'ARMI ◽  
GIUSEPPE IELPA

Disjunctive logic programming (DLP) is a very expressive formalism. It allows for expressing every property of finite structures that is decidable in the complexity class Σ^P_2 (= NP^NP). Despite this high expressiveness, there are some simple properties, often arising in real-world applications, which cannot be encoded in a simple and natural manner. Especially properties that require the use of arithmetic operators (like sum, times, or count) on a set or multiset of elements satisfying some conditions cannot be naturally expressed in classic DLP. To overcome this deficiency, we extend DLP by aggregate functions in a conservative way. In particular, we avoid the introduction of constructs with disputed semantics by requiring aggregates to be stratified. We formally define the semantics of the extended language (called DLP^A), and illustrate how it can be profitably used for representing knowledge. Furthermore, we analyze the computational complexity of DLP^A, showing that the addition of aggregates does not bring a higher cost in that respect. Finally, we provide an implementation of DLP^A in DLV, a state-of-the-art DLP system, and report on experiments which confirm the usefulness of the proposed extension also for the efficiency of computation.


2021 ◽  
Author(s):  
Ezequiel Mikulan ◽  
Simone Russo ◽  
Flavia Maria Zauli ◽  
Piergiorgio d'Orio ◽  
Sara Parmigiani ◽  
...  

Deidentifying MRIs is an imperative challenge: it aims to preclude the re-identification of a research subject or patient while preserving as much geometrical information as possible, in order to maximize data reusability and facilitate interoperability. Although several deidentification methods exist, no comprehensive, comparative evaluation of deidentification performance has been carried out across them. Moreover, the ways these methods can compromise subsequent analyses have not been exhaustively tested. To tackle these issues, we developed AnonyMI, a novel MRI deidentification method, implemented as a user-friendly 3D Slicer plug-in, which aims to balance identity protection and geometrical preservation. To test these features, we performed two series of analyses in which we compared AnonyMI to two other state-of-the-art methods, evaluating at the same time how efficient they are at deidentifying MRIs and how much they affect subsequent analyses, with particular emphasis on source localization procedures. Our results show that all three methods significantly reduce the re-identification risk, but AnonyMI provides the best geometrical conservation. Notably, it also offers several technical advantages, such as a user-friendly interface, multiple input-output capabilities, the possibility of being tailored to specific needs, batch processing, and efficient visualization for quality assurance.


2018 ◽  
Author(s):  
Aditi Kathpalia ◽  
Nithin Nagaraj

Causality testing methods are widely used across the disciplines of science. Model-free methods for causality estimation are very useful, as the underlying model generating the data is often unknown. However, existing model-free measures assume separability of cause and effect at the level of individual samples of measurements and, unlike model-based methods, do not perform any intervention to learn causal relationships. These measures can thus only capture causality that manifests as the associational occurrence of 'cause' and 'effect' in well-separated samples. In real-world processes, 'cause' and 'effect' are often inherently inseparable, or become inseparable in the acquired measurements. We propose a novel measure that uses an adaptive interventional scheme to capture causality that is not merely associational. The scheme is based on characterizing the complexities associated with the dynamical evolution of processes on short windows of measurements. The formulated measure, Compression-Complexity Causality, is rigorously tested on simulated and real datasets, and its performance is compared with that of existing measures such as Granger Causality and Transfer Entropy. The proposed measure is robust to the presence of noise, long-term memory, filtering and decimation, low temporal resolution (including aliasing), non-uniform sampling, finite-length signals, and the presence of common driving variables. Our measure outperforms existing state-of-the-art measures, establishing itself as an effective tool for causality testing in real-world applications.
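A toy sketch of the compression-based intuition only: zlib's compressed length stands in for the Effort-To-Compress measure the authors actually use, and the windowed score below is our simplification, not the published Compression-Complexity Causality estimator.

```python
import random
import zlib

def clen(s: str) -> int:
    """Length in bytes of the zlib-compressed representation of s."""
    return len(zlib.compress(s.encode()))

def influence(x: str, y: str, win: int = 20) -> float:
    """Crude compression-based score of how much x's past helps compress
    y's present, over and above y's own past. Positive values hint at an
    influence of x on y in this toy setting."""
    total, n = 0.0, 0
    for t in range(win, len(y) - win, win):
        cur = y[t:t + win]
        # Cost of the current block of y given y's past alone...
        dc_y = clen(y[t - win:t] + cur) - clen(y[t - win:t])
        # ...versus given both x's and y's pasts as context.
        ctx = x[t - win:t] + y[t - win:t]
        dc_xy = clen(ctx + cur) - clen(ctx)
        total += dc_y - dc_xy
        n += 1
    return total / max(n, 1)

# Toy data: y repeats x with a lag, so x should score higher as a "cause".
random.seed(0)
LAG = 20
x = "".join(random.choice("01") for _ in range(400))
y = "0" * LAG + x[:-LAG]
print("x -> y:", influence(x, y), " y -> x:", influence(y, x))
```

The asymmetry of the two scores is what a causality measure exploits; the actual CCC formulation replaces zlib with Effort-To-Compress and adds the adaptive interventional scheme described above.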

