Latency-information theory and applications: Part II. On real-world knowledge aided radar

2008 ◽  
Author(s):  
Erlan H. Feria

2021 ◽  
pp. 1-11
Author(s):  
Rosy Pradhan ◽  
Mohammad Rafique Khan ◽  
Prabir Kumar Sethy ◽  
Santosh Kumar Majhi

The field of optimization science is proliferating and has made many complex real-world problems tractable. Metaheuristic algorithms, inspired by nature or by physical phenomena, have proved effective at providing near-optimal solutions to such problems. Ant Lion Optimization (ALO) is inspired by the hunting behavior of antlions searching for food. Despite its novel idea, it has some limitations, such as a slow convergence rate and a tendency to become trapped in local optima. Therefore, to enhance the performance of classical ALO, it is hybridized with quantum information theory, yielding the quantum-theory-based ALO (QALO). QALO escapes the limitations of basic ALO and achieves a balance between exploration and exploitation. The CEC2017 benchmark set is adopted to evaluate the performance of QALO against state-of-the-art algorithms. Experimental and statistical results demonstrate that the proposed method is superior to the original ALO. The proposed QALO is further extended to solve the model order reduction (MOR) problem, where the QALO-based MOR method performs notably better than the compared techniques. The results of the simulation study illustrate that the proposed method is effective for both global optimization and model order reduction.
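As an intuition pump for the search dynamics the abstract describes, here is a heavily simplified ALO-style optimizer in Python. It is only a sketch of the general idea (shrinking random walks around an elite solution, trading exploration for exploitation over time), not the published ALO or QALO algorithm; all function and parameter names are invented for this example.

```python
import random

def sphere(x):
    # Classic benchmark function: global minimum 0 at the origin.
    return sum(v * v for v in x)

def simplified_alo(obj, dim=2, n_agents=20, iters=200, lb=-5.0, ub=5.0, seed=0):
    """Toy ALO-style search: ants take random walks around the elite
    (best) antlion; the walk radius shrinks over time, shifting the
    search from exploration toward exploitation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_agents)]
    elite = min(pop, key=obj)
    for t in range(iters):
        radius = (ub - lb) * (1 - t / iters)  # shrinking walk radius
        for i in range(n_agents):
            cand = [min(ub, max(lb, e + rng.uniform(-radius, radius)))
                    for e in elite]
            if obj(cand) < obj(pop[i]):  # greedy acceptance per ant
                pop[i] = cand
        elite = min(pop + [elite], key=obj)  # elite never worsens
    return elite, obj(elite)

best, fbest = simplified_alo(sphere)
```

On the sphere function this sketch converges close to the origin; the quantum-inspired component of QALO (not shown) is what the paper credits with improving this exploration/exploitation balance further.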



2019 ◽  
Author(s):  
Miguel Equihua Zamora ◽  
Mariana Espinosa ◽  
Carlos Gershenson ◽  
Oliver López-Corona ◽  
Mariana Munguia ◽  
...  

We review the concept of ecosystem resilience in its relation to ecosystem integrity from an information-theory approach. We summarize the literature on the subject, identifying three main narratives: ecosystem properties that enable ecosystems to be more resilient; ecosystem response to perturbations; and complexity. We also include original ideas, with theoretical and quantitative developments and application examples. The main contribution is a new way to rethink resilience, one that is mathematically formal and easy to evaluate heuristically in real-world applications: ecosystem antifragility. An ecosystem is antifragile if it benefits from environmental variability. Antifragility therefore goes beyond robustness or resilience: while resilient or robust systems are merely perturbation-resistant, antifragile structures not only withstand stress but also benefit from it.
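The idea that a system can benefit from variability has a standard convexity formalization (in the spirit of Taleb's antifragility, via Jensen's inequality): if a system's response f to shocks is convex, then E[f(X)] > f(E[X]), so variability helps on average. The sketch below tests that numerically; it is a generic toy illustration, not the paper's specific antifragility measure.

```python
import random

def convex_response(x):   # antifragile-like: gains accelerate with stress
    return x * x

def concave_response(x):  # fragile-like: losses accelerate with stress
    return -x * x

rng = random.Random(1)
shocks = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
mean_shock = sum(shocks) / len(shocks)

def variability_gain(f):
    # E[f(X)] - f(E[X]): positive means the system benefits from
    # variability (Jensen's inequality for convex f).
    return sum(f(x) for x in shocks) / len(shocks) - f(mean_shock)

print(variability_gain(convex_response) > 0)   # antifragile
print(variability_gain(concave_response) < 0)  # fragile
```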



2021 ◽  
Vol 13 (2) ◽  
pp. 62-84
Author(s):  
Boudjemaa Boudaa ◽  
Djamila Figuir ◽  
Slimane Hammoudi ◽  
Sidi mohamed Benslimane

Collaborative and content-based recommender systems are widely employed in several activity domains, helping users find relevant products and services (i.e., items). However, with the increasing number of item features, users are becoming more demanding in their requirements, and these recommender systems can no longer serve this purpose efficiently. Built on knowledge bases about users and items, constraint-based recommender systems (CBRSs) meet such complex user requirements. Nevertheless, this kind of recommender system remains rare in research and underutilised in practice, essentially because of difficulties in knowledge acquisition and/or software engineering. This paper details a generic software architecture for CBRS development. Accordingly, a prototype mobile application called DATAtourist has been realized, using the DATAtourisme ontology as a recent real-world knowledge source in tourism. Evaluation of DATAtourist under varied usage scenarios has demonstrated its usability and reliability in recommending personalized touristic points of interest.
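At its core, a constraint-based recommender filters a knowledge base of items against explicit user requirements rather than against ratings or content similarity. The sketch below shows that idea with hypothetical items and constraints; the real DATAtourisme ontology and the DATAtourist architecture are considerably richer.

```python
# Hypothetical items and constraints, for illustration only; the actual
# DATAtourisme schema is not reproduced here.
items = [
    {"name": "City Museum", "category": "museum", "price": 8, "indoor": True},
    {"name": "Botanic Garden", "category": "park", "price": 0, "indoor": False},
    {"name": "Modern Art Gallery", "category": "museum", "price": 15, "indoor": True},
]

def recommend(items, constraints):
    """Return the items satisfying every hard constraint (predicate)."""
    return [it for it in items if all(pred(it) for pred in constraints)]

user_constraints = [
    lambda it: it["price"] <= 10,  # budget constraint
    lambda it: it["indoor"],       # it is raining today
]
result = recommend(items, user_constraints)
# result contains only "City Museum"
```

Unlike collaborative filtering, nothing here depends on other users' behavior: the recommendation is fully explained by which constraints each item satisfies, which is what makes CBRS output easy to justify to the user.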



Hard Reading ◽  
2016 ◽  
pp. 3-5
Author(s):  
Tom Shippey

This chapter argues that science fiction is hard reading because it requires the reader to process information at a level additional to that required for the reading of all fiction. The vital feature which distinguishes the genre is the presence of the novum, a discrete item of information which the reader recognises as not present in the real world. Such items need first to be recognised and then collated to create an alternative vision of reality, the whole process having been described by the critic Darko Suvin as cognitive estrangement: the estrangement demanding recognition, the collation adding the cognitive element. Science fiction is a high-information literature, information being used here in the technical sense of information theory.



Author(s):  
Susan Schneider

How can we determine whether AI is conscious? The chapter begins by illustrating that there are potentially very serious real-world costs to getting facts about AI consciousness wrong. It then proposes a provisional framework for investigating artificial consciousness that involves several tests or markers. One is the AI Consciousness Test, which challenges an AI with a series of increasingly demanding natural-language interactions. Another is based on Integrated Information Theory, developed by Giulio Tononi and others, and considers whether a machine has a high level of "integrated information." A third is the Chip Test, in which, speculatively, an individual's brain is gradually replaced with durable microchips. If the individual being tested continues to report having phenomenal consciousness, the chapter argues that this could be a reason to believe that some machines could be conscious.
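IIT's actual measure, Φ, is defined over a system's cause-effect structure and its minimum-information partition and is far more involved than any short sketch can show. Purely as an intuition pump for "integrated information", the toy below uses mutual information between two parts of a system: it is high when the whole carries information beyond the product of its parts, and zero when the parts are independent.

```python
from math import log2

def mutual_information(joint):
    """I(X;Y) in bits for two binary variables, with the joint
    distribution given as a dict {(x, y): probability}."""
    px = {x: sum(p for (a, _), p in joint.items() if a == x) for x in (0, 1)}
    py = {y: sum(p for (_, b), p in joint.items() if b == y) for y in (0, 1)}
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Two units that always agree: informationally "integrated"
coupled = {(0, 0): 0.5, (1, 1): 0.5, (0, 1): 0.0, (1, 0): 0.0}
# Two independent fair coins: no integration at all
independent = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}

print(mutual_information(coupled))      # 1.0 bit
print(mutual_information(independent))  # 0.0 bits
```

Again, this is only a loose analogy: passing a high-mutual-information bar is nowhere near computing Φ, let alone settling whether a system is conscious.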



Author(s):  
Gary Smith

Humans have invaluable real-world knowledge because we have accumulated a lifetime of experiences that help us recognize, understand, and anticipate. Computers do not have real-world experiences to guide them, so they must rely on statistical patterns in their digital databases, which may be helpful but are certainly fallible.

We use emotions as well as logic to construct concepts that help us understand what we see and hear. When we see a dog, we may visualize other dogs, think about the similarities and differences between dogs and cats, or expect the dog to chase after a cat we see nearby. We may remember a childhood pet or recall past encounters with dogs. Remembering that dogs are friendly and loyal, we might smile and want to pet the dog or throw a stick for the dog to fetch. Remembering once being scared by an aggressive dog, we might pull back to a safe distance.

A computer does none of this. For a computer, there is no meaningful difference between dog, tiger, and XyB3c, other than the fact that they use different symbols. A computer can count the number of times the word dog is used in a story and retrieve facts about dogs (such as how many legs they have), but computers do not understand words the way humans do and will not respond to the word dog the way humans do.

The lack of real-world knowledge is often revealed in software that attempts to interpret words and images. Language translation programs are designed to convert sentences written or spoken in one language into equivalent sentences in another language. In the 1950s, a Georgetown–IBM team demonstrated the machine translation of 60 sentences from Russian to English using a 250-word vocabulary and six grammatical rules. The lead scientist predicted that, with a larger vocabulary and more rules, translation programs would be perfected in three to five years. Little did he know! He had far too much faith in computers.

It has now been more than 60 years and, while translation software is impressive, it is far from perfect. The stumbling blocks are instructive. Humans translate passages by thinking about the content (what the author means) and then expressing that content in another language.
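Smith's point that "dog" and "XyB3c" are interchangeable symbols to a computer can be made concrete: a word counter tallies both with equal ease and equal incomprehension. A minimal example (the story text is invented for illustration):

```python
from collections import Counter
import re

story = ("The dog barked at the cat. The cat ran, and the dog chased "
         "the cat up a tree. XyB3c means nothing, yet it counts the same way.")

# Tokenize into lowercase alphanumeric "words" and tally them.
counts = Counter(re.findall(r"[a-z0-9]+", story.lower()))

print(counts["dog"])    # 2
print(counts["xyb3c"])  # 1
```

The counter handles the nonsense token exactly as it handles "dog": the statistics are computed identically whether or not a symbol means anything, which is precisely the gap between pattern-counting and understanding.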



Author(s):  
Koji Kamei ◽  
Yutaka Yanagisawa ◽  
Takuya Maekawa ◽  
Yasue Kishino ◽  
Yasushi Sakurai ◽  
...  

The construction of real-world knowledge is required if we are to understand real-world events that occur in a networked sensor environment. Since it is difficult to select suitable ‘events’ for recognition in a sensor environment a priori, we propose an incremental model for constructing real-world knowledge. Labeling is the central plank of the proposed model because the model simultaneously improves both the ontology of real-world events and the implementation of a sensor system based on a manually labeled event corpus. A labeling tool is developed in accordance with the model and is evaluated in a practical labeling experiment.



1992 ◽  
Vol 3 (1) ◽  
pp. 1-21 ◽  
Author(s):  
Veda C. Storey


Author(s):  
Bayu Distiawan Trisedya ◽  
Jianzhong Qi ◽  
Rui Zhang

The task of entity alignment between knowledge graphs aims to find entities in two knowledge graphs that represent the same real-world entity. Recently, embedding-based models have been proposed for this task. Such models are built on top of a knowledge graph embedding model that learns entity embeddings to capture the semantic similarity between entities in the same knowledge graph. We propose to learn embeddings that can capture the similarity between entities in different knowledge graphs. Our proposed model helps align entities from different knowledge graphs, and hence enables the integration of multiple knowledge graphs. Our model exploits the large numbers of attribute triples existing in the knowledge graphs and generates attribute character embeddings. The attribute character embedding shifts the entity embeddings from two knowledge graphs into the same space by computing the similarity between entities based on their attributes. We use a transitivity rule to further enrich the number of attributes of an entity to enhance the attribute character embedding. Experiments using real-world knowledge bases show that our proposed model achieves consistent improvements over the baseline models by over 50% in terms of hits@1 on the entity alignment task.
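The hits@1 metric reported here is the fraction of source entities whose correct counterpart is ranked first among the candidates from the other knowledge graph. A minimal sketch (the entity names are hypothetical):

```python
def hits_at_k(ranked_candidates, gold, k=1):
    """Fraction of queries whose gold entity appears in the top-k of its
    ranked candidate list (candidates ordered best-first)."""
    hits = sum(1 for q, cands in ranked_candidates.items()
               if gold[q] in cands[:k])
    return hits / len(ranked_candidates)

# Hypothetical alignment output: for each entity in KG1, candidates in
# KG2 ranked by embedding similarity.
rankings = {
    "kg1:Paris":  ["kg2:Paris", "kg2:Lyon"],
    "kg1:Berlin": ["kg2:Munich", "kg2:Berlin"],
}
gold = {"kg1:Paris": "kg2:Paris", "kg1:Berlin": "kg2:Berlin"}

print(hits_at_k(rankings, gold, k=1))  # 0.5
print(hits_at_k(rankings, gold, k=2))  # 1.0
```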


