Automated Reasoning in Modal and Description Logics via SAT Encoding: the Case Study of K(m)/ALC-Satisfiability

2009 · Vol 35 · pp. 343-389
Author(s): R. Sebastiani, M. Vescovi

In the last two decades, modal and description logics have been applied to numerous areas of computer science, including knowledge representation, formal verification, database theory, distributed computing and, more recently, the semantic web and ontologies. For this reason, the problem of automated reasoning in modal and description logics has been thoroughly investigated. In particular, many approaches have been proposed for efficiently handling the satisfiability of the core normal modal logic K(m) and of its notational variant, the description logic ALC. Although simple in structure, K(m)/ALC is computationally very hard to reason about, its satisfiability being PSPACE-complete. In this paper we start exploring the idea of performing automated reasoning tasks in modal and description logics by encoding them into SAT, so that they can be handled by state-of-the-art SAT tools; as with most previous approaches, we begin our investigation with satisfiability in K(m). We propose an efficient encoding and test it on an extensive set of benchmarks, comparing the approach with the main state-of-the-art tools available. Although the encoding is necessarily worst-case exponential, our experiments show that, in practice, this approach can handle most or all of the problems within the reach of the other approaches, with performance comparable to, or even better than, that of the current state-of-the-art tools.
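
To make the encoding idea concrete, here is a minimal sketch of a one-directional SAT encoding of K-satisfiability for formulas in negation normal form: one propositional variable per (world, subformula) pair, a fresh successor world for each diamond, and deferred clauses tying each box to the diamond-successors of its world. This is an illustration of the general approach, not the paper's exact encoding, and it assumes the python-sat package for the final SAT call.

```python
# Sketch: encode K-satisfiability into SAT (formulas in NNF).
# Formulas: ('atom', 'p'), ('natom', 'p'), ('and', f, g), ('or', f, g),
# ('box', f), ('dia', f).  Assumes python-sat (pip install python-sat).
from pysat.solvers import Glucose3

def modal_depth(f):
    if f[0] in ('atom', 'natom'):
        return 0
    if f[0] in ('and', 'or'):
        return max(modal_depth(f[1]), modal_depth(f[2]))
    return 1 + modal_depth(f[1])                  # 'box' or 'dia'

def k_sat(root):
    clauses, dias, boxes, table, n = [], {}, {}, {}, [0]

    def var(key):                                 # one variable per (world, formula)
        if key not in table:
            n[0] += 1
            table[key] = n[0]
        return table[key]

    def enc(f, w):                                # emit clauses for "var -> f holds at w"
        if f[0] == 'atom':
            return var((w, f))
        if f[0] == 'natom':
            return -var((w, ('atom', f[1])))
        if (w, f) in table:                       # already encoded at this world
            return table[(w, f)]
        v = var((w, f))
        if f[0] == 'and':
            clauses.extend([[-v, enc(f[1], w)], [-v, enc(f[2], w)]])
        elif f[0] == 'or':
            clauses.append([-v, enc(f[1], w), enc(f[2], w)])
        elif f[0] == 'dia':                       # fresh successor world for the diamond
            w2 = w + (len(dias.setdefault(w, [])),)
            dias[w].append((v, w2))
            clauses.append([-v, enc(f[1], w2)])
        else:                                     # 'box': defer until all successors exist
            boxes.setdefault(w, []).append((v, f[1]))
        return v

    clauses.append([enc(root, ())])
    for d in range(modal_depth(root) + 1):        # expand boxes level by level
        for w, v, g in [(w, v, g) for w, lst in boxes.items()
                        if len(w) == d for v, g in lst]:
            for vd, w2 in dias.get(w, []):        # box + dia j -> body at successor j
                clauses.append([-v, -vd, enc(g, w2)])
    with Glucose3(bootstrap_with=clauses) as solver:
        return solver.solve()

# box(p -> q) & dia p & box(not q) is unsatisfiable in K:
f = ('and', ('and', ('box', ('or', ('natom', 'p'), ('atom', 'q'))),
                    ('dia', ('atom', 'p'))),
            ('box', ('natom', 'q')))
print(k_sat(f))                                   # False
```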

2020 · Vol 176 (3-4) · pp. 349-384
Author(s): Domenico Cantone, Marianna Nicolosi-Asmundo, Daniele Francesco Santamaria

In this paper we consider the most common TBox and ABox reasoning services for the description logic 𝒟ℒ⟨4LQS^R,×⟩(D) (𝒟ℒ_D^{4,×} for short) and prove their decidability via a reduction to the satisfiability problem for the set-theoretic fragment 4LQS^R. 𝒟ℒ_D^{4,×} is a very expressive description logic. It combines the high scalability and efficiency of rule languages such as the Semantic Web Rule Language (SWRL) with the expressivity of description logics. In fact, among other features, it supports Boolean operations on concepts and roles, role constructs such as the product of concepts and role chains on the left-hand side of inclusion axioms, role properties such as transitivity, symmetry, reflexivity, and irreflexivity, and data types. We further provide a KE-tableau-based procedure that allows one to perform the main TBox and ABox reasoning tasks for 𝒟ℒ_D^{4,×}. Our algorithm is based on a variant of the KE-tableau system for sets of universally quantified clauses, in which the KE-elimination rule is generalized so as to incorporate the γ-rule. The novel system, called KEγ-tableau, improves on the system introduced in [1] and on standard first-order KE-tableaux [2]. Suitable benchmark test sets executed on C++ implementations of the three systems show that in several cases the performance of the KEγ-tableau-based reasoner is up to about 400% better than that of the other two.
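
As a toy illustration of KE-style proof search, the sketch below decides satisfiability of a set of ground propositional clauses with just two rules: elimination (when all but one literal of a clause are falsified on the branch, add the survivor) and branching by the principle of bivalence. The paper's KEγ-tableau additionally folds the γ-rule for universally quantified clauses into the elimination step, which this propositional toy omits.

```python
# Toy KE-style satisfiability check over propositional clause sets.
# Clauses are frozensets of integer literals (negative = negated atom).
def ke_sat(clauses, branch=frozenset()):
    changed = True
    while changed:                            # KE-elimination to a fixpoint
        changed = False
        for c in clauses:
            live = [l for l in c if -l not in branch]
            if not live:
                return False                  # every literal falsified: branch closes
            if len(live) == 1 and live[0] not in branch:
                branch |= {live[0]}           # the one survivor must hold
                changed = True
    for c in clauses:                         # PB: branch on an undecided literal
        for l in c:
            if l not in branch and -l not in branch:
                return ke_sat(clauses, branch | {l}) or \
                       ke_sat(clauses, branch | {-l})
    return True                               # all clauses satisfied: branch is open

print(ke_sat([frozenset({1, 2}), frozenset({-1, 2}), frozenset({-2})]))  # False
print(ke_sat([frozenset({1, 2}), frozenset({-1, 2})]))                   # True
```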


2018 · Vol 108 (05) · pp. 319-324
Author(s): I. Bogdanov, A. Nuffer, A. Sauer

This article deals with resource efficiency and digital transformation in the manufacturing sector and the interactions that arise between them. In addition to the current state of the art and perspectives, it presents the case-study analysis carried out as part of a recent study, as well as the method developed there for determining resource efficiency potentials. These potentials and the digital measures employed are the central building blocks of this article.


2020 · Vol 2020 · pp. 1-14
Author(s): Ramz L. Fraiha Lopes, Simone G. C. Fraiha, Vinicius D. Lima, Herminio S. Gomes, Gervásio P. S. Cavalcante

This study explores the use of hybrid Autoregressive Integrated Moving Average (ARIMA) and neural network modelling for estimates of the electric field along vertical paths (buildings) close to Digital Television (DTV) transmitters. The work was carried out in Belém, one of the most urbanized cities in the Brazilian Amazon, and includes a case study of the application of this modelling within the subscenarios found in Belém. Its results were compared with those of Recommendation ITU-R P.1546-5 and proved better in every subscenario analysed. In the worst case, the model's estimate was approximately 65% better than the ITU's. We also compared this modelling with a classic technique, the least squares (LS) method; in most situations, the hybrid model achieved better results than LS.
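
One common way to build such a hybrid, shown here as a plausible sketch rather than the paper's exact architecture, is the classic two-stage scheme: fit an ARIMA model to capture the linear structure of the series, then train a neural network on its residuals and sum the two predictions. The series below is synthetic stand-in data.

```python
# Hybrid ARIMA + MLP sketch (Zhang-style residual hybrid) on synthetic data.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0, 1, 300)) + np.sin(np.arange(300) / 5.0)

arima = ARIMA(y, order=(1, 1, 1)).fit()       # linear component
resid = arima.resid                           # nonlinear leftovers for the NN

lag = 5                                       # lagged residuals as features
X = np.column_stack([resid[i:len(resid) - lag + i] for i in range(lag)])
t = resid[lag:]
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(X, t)

hybrid = arima.predict(start=lag, end=len(y) - 1) + mlp.predict(X)
print("in-sample RMSE:", np.sqrt(np.mean((y[lag:] - hybrid) ** 2)))
```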


2019 · Vol 11 (7) · pp. 2963-2986
Author(s): Nikos Dipsis, Kostas Stathis

Abstract The numerous applications of the internet of things (IoT) and sensor networks, combined with the specialized devices used in each, have led to a proliferation of domain-specific middleware, which in turn creates interoperability issues between the corresponding architectures and the technologies used. But what if we wanted to apply a machine learning algorithm to an IoT application so that it adapts intelligently to changes in the environment, or to enable a software agent to enrich with artificial intelligence (AI) a smart home consisting of multiple and possibly incompatible technologies? In this work we answer these questions by studying a framework that explores how to simplify the incorporation of AI capabilities into existing sensor-actuator networks or IoT infrastructures, making the services offered in such settings smarter. Towards this goal we present eVATAR+, a middleware that implements the interactions within such integrations systematically and transparently from the developers' perspective, and that provides a simple, easy-to-use developer interface. eVATAR+ uses Java server technologies enhanced by mediator functionality, providing interoperability, maintainability and heterogeneity support. We exemplify eVATAR+ with a concrete case study and evaluate the relative merits of our approach by comparing our work with the current state of the art.
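
eVATAR+ itself is built on Java server technologies; the Python sketch below, with invented class and method names, only illustrates the mediator idea at its core: vendor-specific sensors and actuators are wrapped behind one uniform interface that an AI agent can program against, regardless of the underlying technology.

```python
# Minimal mediator: one uniform sense/act interface over heterogeneous devices.
from typing import Callable, Dict

class Mediator:
    def __init__(self) -> None:
        self._sensors: Dict[str, Callable[[], float]] = {}
        self._actuators: Dict[str, Callable[[float], None]] = {}

    def register_sensor(self, name: str, read: Callable[[], float]) -> None:
        self._sensors[name] = read            # wrap any vendor API in a callable

    def register_actuator(self, name: str, act: Callable[[float], None]) -> None:
        self._actuators[name] = act

    def sense(self, name: str) -> float:
        return self._sensors[name]()

    def act(self, name: str, value: float) -> None:
        self._actuators[name](value)

m = Mediator()
m.register_sensor("temp", lambda: 23.5)       # stand-in for real hardware
m.register_actuator("heater", lambda v: print("heater ->", v))
if m.sense("temp") < 24.0:                    # a trivial agent policy
    m.act("heater", 1.0)
```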


2020 · Vol 7 (1)
Author(s): Hylke E. Beck, Seth Westra, Jackson Tan, Florian Pappenberger, George J. Huffman, ...

Abstract We introduce the Precipitation Probability DISTribution (PPDIST) dataset, a collection of global high-resolution (0.1°) observation-based climatologies (1979–2018) of the occurrence and peak intensity of precipitation (P) at daily and 3-hourly time-scales. The climatologies were produced using neural networks trained with daily P observations from 93,138 gauges and hourly P observations (resampled to 3-hourly) from 11,881 gauges worldwide. Mean validation coefficient of determination (R²) values ranged from 0.76 to 0.80 for the daily P occurrence indices, and from 0.44 to 0.84 for the daily peak P intensity indices. The neural networks performed significantly better than current state-of-the-art reanalysis (ERA5) and satellite (IMERG) products for all P indices. Using a threshold of 0.1 mm per 3 h, P was estimated to occur 12.2%, 7.4%, and 14.3% of the time, on average, over the global, land, and ocean domains, respectively. The highest P intensities were found over parts of Central America, India, and Southeast Asia, along the western equatorial coast of Africa, and in the intertropical convergence zone. The PPDIST dataset is available via www.gloh2o.org/ppdist.
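
The occurrence statistic above is straightforward to reproduce on any gridded P dataset; the sketch below computes it on random stand-in data (not PPDIST) with the same 0.1 mm per 3 h threshold.

```python
# Fraction of 3-hourly intervals with P >= 0.1 mm, per grid cell.
import numpy as np

rng = np.random.default_rng(1)
p = rng.exponential(0.05, size=(2920, 18, 36))   # (time, lat, lon), mm per 3 h, coarse stand-in grid

occurrence = (p >= 0.1).mean(axis=0)             # per-cell wet fraction
print("mean occurrence: %.1f%%" % (100 * occurrence.mean()))
```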


2008 · Vol 1076
Author(s): Ryan Feeler, Jeremy Junghans, Edward Stephens, Greg Kemner, Fred Barlow, ...

ABSTRACT A new, patent-pending method of cooling high-power laser diode arrays has been developed which leverages advances in several areas of materials science and manufacturing. This method utilizes multi-layer ceramic microchannel coolers with small (hundreds of microns) integral water channels to cool the laser diode bar. This approach is similar to the current state-of-the-art method of cooling laser diode bars with copper microchannel coolers. However, the multi-layer ceramic coolers offer many advantages over the copper coolers, including reliability and manufacturing flexibility. The ceramic coolers do not require the use of deionized water, as is mandatory for high-thermal-performance copper coolers. Experimental and modeled data are presented that demonstrate thermal performance equal to or better than that of commercially available copper microchannel coolers. Results of long-term, high-flow tests are also presented to demonstrate the resistance of the ceramic coolers to erosion. The materials selected for these coolers allow the laser diode bars to be mounted using eutectic AuSn solder, which provides maximum solder bond integrity over the life of the part.
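
For a rough sense of the scale at which microchannel coolers operate, the back-of-envelope estimate below assumes laminar, fully developed flow (Nu ≈ 3.66) in channels a few hundred microns across. Every number is an illustrative assumption, not a measurement from this work.

```python
# Back-of-envelope convective estimate for a microchannel cooler.
water_k = 0.6           # W/(m K), thermal conductivity of water
d_h = 300e-6            # m, hydraulic diameter (channels of a few hundred microns)
nu = 3.66               # Nusselt number, laminar constant-wall-temperature case
h = nu * water_k / d_h  # convective coefficient: small channels -> large h

area = 2e-4             # m^2, assumed wetted channel area under one diode bar
q = 60.0                # W, assumed waste heat from one bar
dT = q / (h * area)     # film temperature rise across the water boundary layer
print(f"h = {h:.0f} W/(m^2 K), film dT = {dT:.1f} K")
```

The point of the small hydraulic diameter is visible directly in the formula: h scales as 1/d_h, which is why channels of hundreds of microns outperform conventional cold plates.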


Author(s): Franz Baader, Patrick Koopmann, Francesco Kriegel, Adrian Nuradiansyah

Abstract The application of automated reasoning approaches to Description Logic (DL) ontologies may produce certain consequences that either are deemed to be wrong or should be hidden for privacy reasons. The question is then how to repair the ontology such that the unwanted consequences can no longer be deduced. An optimal repair is one that removes the fewest other consequences. Most previous approaches to ontology repair are syntactic in nature, in that they remove or weaken the axioms explicitly present in the ontology, and thus cannot achieve semantic optimality. In previous work, we addressed the problem of computing optimal repairs of (quantified) ABoxes, where the unwanted consequences are described by concept assertions of the lightweight DL ℰℒ. In the present paper, we improve on the results achieved so far in two ways. First, we allow for the presence of terminological knowledge in the form of an ℰℒ TBox. This TBox is assumed to be static, in the sense that it cannot be changed in the repair process. Second, the construction of optimal repairs described in our previous work is best-case exponential. We introduce an optimized construction that is exponential only in the worst case. First experimental results indicate that this reduces the size of the computed optimal repairs considerably.
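
For contrast with the semantically optimal repairs studied here, the toy sketch below implements the naive syntactic approach that the paper improves on: remove a smallest set of ABox assertions so that the unwanted consequence is no longer entailed. The entails function is a trivial stand-in for a DL reasoner call; real ℰℒ entailment with respect to a TBox is far richer.

```python
# Naive syntactic ABox repair: smallest assertion-removal set first.
from itertools import combinations

def entails(abox, support):
    # Toy test: the consequence holds iff its supporting assertions are present.
    return support <= abox

def syntactic_repair(abox, support):
    for k in range(len(abox) + 1):         # try smaller removal sets first
        for removed in combinations(sorted(abox), k):
            candidate = abox - set(removed)
            if not entails(candidate, support):
                return candidate
    return set()

abox = {"A(a)", "r(a,b)", "B(b)"}          # unwanted consequence: (exists r.B)(a)
print(syntactic_repair(abox, {"r(a,b)", "B(b)"}))
```

Even in this toy, the repair discards a whole assertion, and with it every consequence that assertion supported; the paper's point is that semantically optimal repairs can preserve strictly more.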


Author(s): Kevin R. Anderson, Wael Yassine

Abstract This paper presents modeling of the Puna Geothermal Venture as a case study in understanding how geothermal technology can be successfully implemented. The paper reviews the Puna Geothermal Venture specifications, followed by simulation results obtained with the NREL SAM and RETScreen analysis tools in order to quantify the pertinent metrics associated with retrofitting the geothermal power plant from its current capacity of 30 MW to 60 MW. The paper closes with a review of current state-of-the-art H2S abatement strategies for geothermal power plants and outlines how these technologies could be implemented at the Puna Geothermal Venture.
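
The headline retrofit scenario reduces to simple capacity arithmetic; the sketch below uses an assumed capacity factor (geothermal plants typically run near baseload) rather than figures taken from the SAM/RETScreen runs.

```python
# Annual energy at 30 MW vs 60 MW nameplate, assumed 90% capacity factor.
def annual_gwh(capacity_mw, capacity_factor=0.90):
    return capacity_mw * capacity_factor * 8760 / 1000.0   # 8760 h per year

for mw in (30, 60):
    print(f"{mw} MW -> {annual_gwh(mw):.0f} GWh/yr")       # 237 and 473 GWh/yr
```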


Designs · 2018 · Vol 2 (4) · pp. 37
Author(s): Charul Chadha, Kathryn Crowe, Christina Carmen, Albert Patterson

This work explores an additive-manufacturing-enabled combination-of-function approach for the design of modular products. AM technologies allow the design and manufacture of nearly free-form geometry, which can be used to create more complex, multi-function or multi-feature parts. The approach presented here replaces sub-assemblies within a modular product or system with more complex consolidated parts that are designed and manufactured using AM technologies. This approach can increase the reliability of systems and products by reducing the number of interfaces, while allowing the more complex parts to be optimized during design. The smaller part count and the ability of users to replace or upgrade system or product parts on demand should reduce user risk and life-cycle costs and prevent obsolescence for the users of many systems. This study presents a detailed review of the current state of the art in modular product design in order to demonstrate the place, need and usefulness of this AM-enabled method for systems and products that could benefit from it. A detailed case study is developed and presented to illustrate the concepts.
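
The reliability claim can be made concrete with the standard series-reliability model: interfaces behave like components in series, so a consolidated part with fewer interfaces multiplies out fewer failure-prone terms. The per-interface reliability below is an assumed value for illustration.

```python
# Series reliability: R_system = r ** n for n identical interfaces.
def series_reliability(r_interface, n_interfaces):
    return r_interface ** n_interfaces

r = 0.995                                   # assumed per-interface reliability
print(series_reliability(r, 12))            # conventional assembly -> ~0.942
print(series_reliability(r, 4))             # AM-consolidated design -> ~0.980
```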


2020 · Vol 34 (10) · pp. 13833-13834
Author(s): Anish Kachinthaya, Yi Ding, Tobias Hollerer

In this paper, we look at how depth data can benefit existing object masking methods applied to occluded scenes. Masking the pixel locations of objects within scenes gives computers a spatial awareness of where objects are within images. The current state-of-the-art algorithm for masking objects in images is Mask R-CNN, which builds on the Faster R-CNN network to mask object pixels rather than just detecting their bounding boxes. This paper examines the weaknesses Mask R-CNN has in masking people when they are occluded in a frame, and then looks at how depth data gathered from an RGB-D sensor can be used. We provide a case study to show how simply applying thresholding methods to the depth information can aid in distinguishing occluded persons. The intention of our research is to examine how features from depth data can benefit object pixel masking methods in an explainable manner, especially in complex scenes with multiple objects.
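
A minimal version of the thresholding idea, on synthetic arrays standing in for RGB-D sensor data: keep only the pixels whose depth falls within a band around the target person, which separates them from a closer occluder.

```python
# Depth-band thresholding to isolate a partially occluded person.
import numpy as np

depth = np.full((120, 160), 4.0)            # background wall at 4 m
depth[40:100, 60:110] = 2.0                 # person at 2 m
depth[55:120, 80:130] = 1.2                 # occluder at 1.2 m, partly in front

person_band = (depth > 1.6) & (depth < 2.4) # band chosen around the person's depth
mask = person_band.astype(np.uint8)         # 1 where the occluded person is visible
print("visible person pixels:", int(mask.sum()))
```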

