A Layered Architecture for a Fuzzy Semantic Approach for Satellite Image Analysis

2015 ◽  
Vol 6 (2) ◽  
pp. 31-56 ◽  
Author(s):  
Cecilia Zanni-Merk ◽  
Stella Marc-Zwecker ◽  
Cédric Wemmert ◽  
François de Bertrand de Beuvron

The extended use of high and very high spatial resolution imagery inherently demands classification methods capable of capturing the underlying semantics. Object-oriented classification methods are currently considered the most appropriate alternative, because they incorporate contextual information and domain knowledge into the analysis. Integrating knowledge first requires a detailed acquisition process and then a formal representation. Ontologies are a very suitable approach to both knowledge formalization and exploitation. A novel semi-automatic fuzzy semantic approach focused on the extraction and classification of urban objects is introduced here. A four-layered architecture separates concerns among knowledge, rules, experience, and meta-knowledge. Knowledge is the fundamental layer with which the other layers interact. Rules derive conclusions and make assertions based on knowledge. The experience layer supports the classification process when an attempt to identify an object fails, applying specific expert rules to infer unusual membership. Finally, the meta-knowledge layer contains knowledge about the use of the other layers.
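The knowledge/rules separation described in this abstract might be sketched, in spirit, as follows; the fuzzy membership functions, class names, and thresholds below are illustrative assumptions, not the authors' actual knowledge base.

```python
# Hypothetical sketch of the knowledge and rule layers described above.
# A trapezoidal fuzzy membership maps an attribute value to a degree in [0, 1].

def trapezoid(x, a, b, c, d):
    """Fuzzy membership: 0 outside (a, d), 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Knowledge layer (illustrative thresholds): a "building" object is
# compact and moderately elongated.
KNOWLEDGE = {
    "building": {
        "compactness": (0.4, 0.6, 1.0, 1.1),
        "elongation": (0.0, 0.0, 2.0, 4.0),
    },
}

def classify(obj):
    """Rule layer: min-combine fuzzy memberships per class; hand off to
    the 'experience' layer (here, a stub) when no class is confident."""
    scores = {}
    for cls, attrs in KNOWLEDGE.items():
        degrees = [trapezoid(obj[name], *bounds) for name, bounds in attrs.items()]
        scores[cls] = min(degrees)  # conjunction of fuzzy conditions
    best = max(scores, key=scores.get)
    if scores[best] < 0.5:
        return "unknown"  # would trigger expert 'experience' rules
    return best

print(classify({"compactness": 0.8, "elongation": 1.0}))  # building
```

In the paper's architecture the fallback branch would invoke the experience layer's expert rules rather than return "unknown"; the stub only marks where that hand-off occurs.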

2021 ◽  
pp. 103546
Author(s):  
Cristóbal Barba-González ◽  
Antonio J. Nebro ◽  
José García-Nieto ◽  
María del Mar Roldán-García ◽  
Ismael Navas-Delgado ◽  
...  

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Maiki Higa ◽  
Shinya Tanahara ◽  
Yoshitaka Adachi ◽  
Natsumi Ishiki ◽  
Shin Nakama ◽  
...  

In this report, we propose a deep learning technique for high-accuracy estimation of the intensity class of a typhoon from a single satellite image, by incorporating meteorological domain knowledge. By using the Visual Geometry Group's model, VGG-16, with images preprocessed with fisheye distortion, which enhances a typhoon's eye, eyewall, and cloud distribution, we achieved much higher classification accuracy than that of a previous study, even with sequential-split validation. Through comparison of t-distributed stochastic neighbor embedding (t-SNE) plots for the feature maps of VGG with the original satellite images, we also verified that the fisheye preprocessing facilitated cluster formation, suggesting that our model could successfully extract image features related to the typhoon intensity class. Moreover, gradient-weighted class activation mapping (Grad-CAM) was applied to highlight the eye and the cloud distributions surrounding the eye, which are important regions for intensity classification; the results suggest that our model qualitatively gained a viewpoint similar to that of domain experts. A series of analyses revealed that the data-driven approach using only deep learning has limitations, and the integration of domain knowledge could bring new breakthroughs.
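The fisheye preprocessing can be illustrated as a radial remap that magnifies the image center (where the eye and eyewall sit) and compresses the periphery; the paper's exact transform is not reproduced here, so the `strength` parameter and nearest-neighbor sampling below are assumptions.

```python
import numpy as np

def fisheye(img, strength=2.0):
    """Radially remap an image so the center is magnified and the
    periphery compressed (a plausible stand-in for the fisheye
    preprocessing described above; the paper's transform may differ)."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = ys - cy, xs - cx
    r = np.sqrt(dy**2 + dx**2)
    r_max = np.sqrt(cy**2 + cx**2)
    # Each output radius r samples the input at a smaller radius
    # r_in = r_max * (r / r_max) ** strength, magnifying the center.
    r_in = r_max * (r / r_max) ** strength
    scale = np.where(r > 0, r_in / np.maximum(r, 1e-9), 0.0)
    src_y = np.clip(np.round(cy + dy * scale).astype(int), 0, h - 1)
    src_x = np.clip(np.round(cx + dx * scale).astype(int), 0, w - 1)
    return img[src_y, src_x]

# A 64x64 horizontal gradient: after the remap, the central columns
# occupy a larger share of the output.
img = np.tile(np.arange(64, dtype=float), (64, 1))
out = fisheye(img)
print(out.shape)  # (64, 64)
```

The remapped image would then be fed to VGG-16 as in any standard image-classification pipeline.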


2019 ◽  
Vol 56 (2) ◽  
pp. 60-76
Author(s):  
Axel Gelfert

Epistemologists of testimony have tended to construct highly stylized (so-called “null setting”) examples in support of their respective philosophical positions, the paradigmatic case being the casual request for directions from a random stranger. The present paper analyzes the use of such examples in the early controversy between reductionists and anti-reductionists about testimonial justification. The controversy concerned, on the one hand, the source of whatever epistemic justification our testimony-based beliefs might have, and, on the other hand, the phenomenology of testimonial acceptance and rejection. As it turns out, appeal to “null setting” cases did not resolve, but instead deepened, the theoretical disputes between reductionists and anti-reductionists. This, it is suggested, is because interpreters ‘fill in’ missing details in ways that reflect their own peculiarities in perspective, experience, upbringing, and philosophical outlook. In response, two remedial strategies have been pursued in recent years: First, we could invert the usual strategy and turn to formal contexts, rather than informal settings, as the paradigmatic scenarios for any prospective epistemology of testimony. Second, instead of “null setting” scenarios, we can focus on richly described cases that either include, or are embedded into, sufficient contextual information to allow for educated judgments concerning the reliability and trustworthiness of the testimony and testifiers involved. The prospects of both of these approaches are then discussed and evaluated.


2020 ◽  
Author(s):  
Harith Al-Sahaf ◽  
A Song ◽  
K Neshatian ◽  
Mengjie Zhang

Image classification is a complex but important task, especially in areas of machine vision and image analysis such as remote sensing and face recognition. One of the challenges in image classification is finding an optimal set of features for a particular task, because the choice of features has a direct impact on classification performance. However, the goodness of a feature is highly problem dependent, and domain knowledge is often required. To address these issues we introduce a Genetic Programming (GP) based image classification method, Two-Tier GP, which operates directly on raw pixels rather than features. The first tier in a classifier automatically defines features based on the raw image input, while the second tier makes the classification decision. Compared to conventional feature-based image classification methods, Two-Tier GP achieved better accuracies on a range of different tasks. Furthermore, by using the features defined by the first tier of these Two-Tier GP classifiers, conventional classification methods obtained higher accuracies than when classifying on manually designed features. Analysis of the evolved Two-Tier image classifiers shows that genuine features are captured in the programs and that the mechanism for achieving high accuracy can be revealed. The Two-Tier GP method has clear advantages in image classification, such as high accuracy, good interpretability, and the removal of the explicit feature extraction process. © 2012 IEEE.
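The two-tier program structure might look like the following sketch; the region primitive and the fixed program below are illustrative only (in the actual method, GP evolves both tiers jointly rather than using hand-written ones).

```python
# Sketch of the two-tier program structure described above (not the
# authors' implementation): tier 1 turns raw pixels into features by
# aggregating pixel values over regions; tier 2 combines the features
# into a class decision.

def region_mean(img, x, y, w, h):
    """Tier-1 primitive: mean intensity of a rectangular region."""
    vals = [img[r][c] for r in range(y, y + h) for c in range(x, x + w)]
    return sum(vals) / len(vals)

def two_tier_program(img):
    """Tier 2: an arithmetic combination of tier-1 features,
    thresholded at 0 to yield a binary class."""
    f1 = region_mean(img, 0, 0, 2, 2)   # top-left patch
    f2 = region_mean(img, 2, 2, 2, 2)   # bottom-right patch
    return 1 if f1 - f2 > 0 else 0      # class decision

bright_top = [[9, 9, 0, 0]] * 2 + [[0, 0, 1, 1]] * 2
print(two_tier_program(bright_top))  # 1
```

Because both the regions and the combining expression live in one evolved program tree, the learned features remain inspectable, which is the interpretability advantage the abstract highlights.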


2010 ◽  
Vol 37 ◽  
pp. 247-277 ◽  
Author(s):  
S. Qu ◽  
J. Y. Chai

To tackle the vocabulary problem in conversational systems, previous work has applied unsupervised learning approaches to co-occurring speech and eye gaze during interaction to automatically acquire new words. Although these approaches have shown promise, several issues related to human language behavior and human-machine conversation have not been addressed. First, psycholinguistic studies have shown certain temporal regularities between human eye movement and language production. While these regularities can potentially guide the acquisition process, they have not been incorporated in the previous unsupervised approaches. Second, conversational systems generally have an existing knowledge base about the domain and vocabulary. While this existing knowledge can potentially help bootstrap and constrain the acquired new words, it has not been incorporated in the previous models. Third, eye gaze can serve different functions in human-machine conversation. Some gaze streams may not be closely coupled with the speech stream, and thus are potentially detrimental to word acquisition. Automated recognition of closely coupled speech-gaze streams based on conversation context is therefore important. To address these issues, we developed new approaches that incorporate user language behavior, domain knowledge, and conversation context in word acquisition. We evaluated these approaches in the context of situated dialogue in a virtual world. Our experimental results show that incorporating the above three types of contextual information significantly improves word acquisition performance.
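The unsupervised core of such word acquisition might be sketched as a co-occurrence count between spoken words and gazed entities; the data and the simple argmax scoring below are illustrative stand-ins for the paper's models, which additionally use temporal regularities, domain knowledge, and conversation context.

```python
from collections import Counter, defaultdict

# Sketch of co-occurrence-based word acquisition from paired speech
# and gaze streams (a simplification of the approaches discussed above).

# Each observation: (words spoken, entities fixated during the utterance).
observations = [
    (["move", "the", "lamp"], ["lamp", "table"]),
    (["a", "red", "lamp"], ["lamp"]),
    (["the", "table"], ["table", "lamp"]),
]

cooc = defaultdict(Counter)
for words, gazed in observations:
    for w in words:
        cooc[w].update(gazed)

def acquire(word):
    """Associate a word with the entity it most often co-occurs with."""
    return cooc[word].most_common(1)[0][0]

print(acquire("lamp"))  # lamp
```

The issues the abstract raises would enter as weights on these counts: down-weighting gaze streams judged not closely coupled with speech, and pruning candidate entities already covered by the system's existing vocabulary.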


Author(s):  
Yunpeng Li ◽  
Utpal Roy ◽  
Y. Tina Lee ◽  
Sudarsan Rachuri

Rule-based expert systems such as CLIPS (C Language Integrated Production System) are 1) based on inductive (if-then) rules to elicit domain knowledge and 2) designed to reason new knowledge from existing knowledge and given inputs. Recently, data mining techniques have been advocated for discovering knowledge from massive historical or real-time sensor data. Combining top-down expert-driven rule models with bottom-up data-driven prediction models facilitates enriching and improving the predefined knowledge in an expert system with data-driven insights. However, combining is possible only if there is a common and formal representation of these models, so that they can be exchanged, reused, and orchestrated among different authoring tools. This paper investigates the open standard PMML (Predictive Model Markup Language) for integrating rule-based expert systems with data analytics tools, so that a decision maker has access to powerful tools for both reasoning-intensive tasks and data-intensive tasks. We present a process planning use case in the manufacturing domain, originally implemented as a CLIPS-based expert system. Different paradigms for interpreting expert system facts and rules as PMML models (and vice versa), as well as challenges in representing and composing these models, are explored and discussed in detail.
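The rule mapping this abstract describes can be sketched as below: a CLIPS-style production rule expressed as a PMML `SimpleRule`. Field names, the rule itself, and the toy interpreter are illustrative, and the full PMML document wrapper (header, namespace, `RuleSetModel`) is omitted for brevity.

```python
import xml.etree.ElementTree as ET

# CLIPS (conceptually):
#   (defrule rough-surface
#     (feature (tolerance ?t&:(< ?t 0.05)))
#     => (assert (process grinding)))
#
# The same rule as a PMML RuleSet entry:
pmml_rule = """
<SimpleRule id="rough-surface" score="grinding">
  <SimplePredicate field="tolerance" operator="lessThan" value="0.05"/>
</SimpleRule>
"""

rule = ET.fromstring(pmml_rule)

def fire(rule, facts):
    """Minimal interpreter: return the rule's score if its predicate holds."""
    pred = rule.find("SimplePredicate")
    if pred.get("operator") == "lessThan":
        if facts[pred.get("field")] < float(pred.get("value")):
            return rule.get("score")
    return None

print(fire(rule, {"tolerance": 0.02}))  # grinding
```

Once rules sit in PMML alongside mined models (trees, regressions), a single decision pipeline can orchestrate both, which is the interoperability the paper targets.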


Author(s):  
Marwa Manaa ◽  
Thouraya Sakouhi ◽  
Jalel Akaichi

Mobility data has become an important paradigm for computing in various areas. Mobility data is considered a core resource revealing the traces of mobile objects' displacements. While each area presents a different view of trajectories, all aim to enrich mobility data with domain knowledge. Semantic annotations may offer a common model for trajectories. Ontology design patterns seem to be promising solutions for defining such trajectory-related patterns, and they appear more suitable for the annotation of multiperspective data than the use of ontologies alone. The trajectory ontology design pattern is used as a semantic layer for trajectory data warehouses, for the sake of analyzing the behaviors of mobile entities. In this chapter, the authors propose an approach to the semantic modeling of trajectories and trajectory data warehouses based on a trajectory ontology design pattern. They validate the proposal through real case studies dealing with behavior analysis and animal tracking.
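The stop-move structure that trajectory ontology design patterns commonly formalize might be sketched as follows; the class and attribute names below are illustrative, not the authors' ontology terms.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Minimal sketch of a stop-move trajectory pattern: a trajectory is a
# sequence of episodes, each optionally carrying a semantic annotation
# supplied by the domain (the "semantic layer" discussed above).

@dataclass
class Episode:
    kind: str                        # "stop" or "move"
    start: float                     # timestamps
    end: float
    annotation: Optional[str] = None  # e.g. "feeding site"

@dataclass
class Trajectory:
    moving_object: str
    episodes: List[Episode] = field(default_factory=list)

    def stops(self):
        return [e for e in self.episodes if e.kind == "stop"]

# An animal-tracking example in the spirit of the chapter's case studies:
t = Trajectory("stork-17", [
    Episode("move", 0, 10),
    Episode("stop", 10, 25, annotation="feeding site"),
])
print(len(t.stops()))  # 1
```

In a trajectory data warehouse, the annotated episodes would become the fact grain, with the ontology pattern supplying the shared dimensions across application domains.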


Author(s):  
Sudhir K. Routray ◽  
Sarath Anand

Narrowband internet of things (NBIoT) is a leaner and thinner version of the IoT which needs far fewer resources than other forms of the IoT. Therefore, it is considered a low power wide area network (LPWAN) technology. It can connect a large number of devices with a very small amount of power and bandwidth, and it has the potential to connect almost any object of interest to the internet. Thus, it is a very powerful technology for establishing the internet of everything (IoE), a framework consisting of data, processes, sensing, and follow-up actions for an intelligent environment. In this chapter, the authors present the IoE-friendly architecture of NBIoT, its LPWAN features, its principles, and its common applications in different sectors to show its versatility toward the IoE. They show the layered architecture of a typical NBIoT deployment and the main protocols used in narrowband scenarios. They show the general applications of NBIoT for the IoE and how it can provide services with limited bandwidth and power. With these features, NBIoT is an attractive technology for the IoE that can accelerate innovation opportunities.

