Invariance from the Euclidean Geometer's Perspective

Perception ◽  
1994 ◽  
Vol 23 (5) ◽  
pp. 547-561 ◽  
Author(s):  
Luc J Van Gool ◽  
Theo Moons ◽  
Eric Pauwels ◽  
Johan Wagemans

It is remarkable how well the human visual system can cope with changing viewpoints when it comes to recognising shapes. The state of the art in machine vision is still quite remote from solving such tasks. Nevertheless, a surge in invariance-based research has led to the development of methods for solving recognition problems still considered hard until recently. A nonmathematical account explains the basic philosophy and trade-offs underlying this strand of research. The principles are explained for the relatively simple case of planar-object recognition under arbitrary viewpoints. Well-known Euclidean concepts form the basis of invariance in this case. Introducing constraints in addition to that of planarity may further simplify the invariants. On the other hand, there are problems for which no invariants exist.
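The abstract stops short of giving a concrete invariant, but the classic example it alludes to for planar-object recognition under arbitrary viewpoints is the cross-ratio of four collinear points, which is preserved by any projective transformation of the plane. A minimal sketch (the point coordinates and the particular projective map below are illustrative, not taken from the paper):

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (AC/BC) / (AD/BD) of four collinear points,
    given as scalar coordinates along their common line."""
    return ((c - a) / (c - b)) / ((d - a) / (d - b))

def projective(x, p=2.0, q=1.0, r=0.5, s=3.0):
    """A projective map of the line, x -> (p*x + q) / (r*x + s),
    standing in for a change of camera viewpoint of a planar scene."""
    return (p * x + q) / (r * x + s)

pts = [0.0, 1.0, 2.0, 4.0]
before = cross_ratio(*pts)
after = cross_ratio(*(projective(x) for x in pts))
print(abs(before - after) < 1e-9)  # the cross-ratio survives the viewpoint change
```

Because the cross-ratio depends only on the four points and not on the viewpoint, it can be matched directly against a stored model without recovering the transformation itself — the basic trade-off the abstract describes.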

Algorithms ◽  
2020 ◽  
Vol 13 (7) ◽  
pp. 167 ◽  
Author(s):  
Dan Malowany ◽  
Hugo Guterman

Computer vision is currently one of the most exciting and rapidly evolving fields of science, and it affects numerous industries. Research and development breakthroughs, mainly in the field of convolutional neural networks (CNNs), have opened the way to unprecedented sensitivity and precision in object detection and recognition tasks. Nevertheless, findings in recent years on the sensitivity of neural networks to additive noise, lighting conditions, and the completeness of the training dataset indicate that this technology still lacks the robustness needed by the autonomous robotics industry. In an attempt to bring computer vision algorithms closer to the capabilities of a human operator, the mechanisms of the human visual system were analyzed in this work. Recent studies show that the mechanisms behind the recognition process in the human brain include continuous generation of predictions based on prior knowledge of the world. These predictions enable rapid generation of contextual hypotheses that bias the outcome of the recognition process. This mechanism is especially advantageous in situations of uncertainty, when visual input is ambiguous. In addition, the human visual system continuously updates its knowledge about the world based on the gaps between its predictions and the visual feedback. CNNs are feed-forward in nature and lack such top-down contextual attenuation mechanisms. As a result, although they process massive amounts of visual information during their operation, the information is not transformed into knowledge that can be used to generate contextual predictions and improve their performance. In this work, an architecture was designed that aims to integrate the concepts behind the top-down prediction and learning processes of the human visual system with state-of-the-art bottom-up object recognition models, e.g., deep CNNs.
The work focuses on two mechanisms of the human visual system: anticipation-driven perception and reinforcement-driven learning. Imitating these top-down mechanisms, together with the state-of-the-art bottom-up feed-forward algorithms, resulted in an accurate, robust, and continuously improving target recognition model.
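The abstract describes anticipation-driven perception only conceptually; one simple way to picture prediction-biased recognition is to re-weight bottom-up class scores with a top-down contextual prior. The class labels, scores, and prior below are purely illustrative and are not the authors' architecture:

```python
import numpy as np

def contextual_recognition(bottom_up_logits, context_prior):
    """Illustrative fusion of bottom-up evidence with a top-down
    contextual prior: the prior re-weights the feed-forward class
    scores, so an ambiguous input is resolved by context."""
    likelihood = np.exp(bottom_up_logits - bottom_up_logits.max())
    likelihood /= likelihood.sum()
    posterior = likelihood * context_prior   # context biases the outcome
    return posterior / posterior.sum()

# Ambiguous bottom-up evidence between class 0 ("dog") and class 1 ("wolf") ...
logits = np.array([2.0, 2.1, -1.0])
# ... while a "living-room" context makes "dog" far more plausible a priori.
prior = np.array([0.8, 0.1, 0.1])
post = contextual_recognition(logits, prior)
print(post.argmax())  # context flips the decision toward class 0
```

The feed-forward scores alone would pick class 1; with the contextual prior the posterior favors class 0, which is the kind of hypothesis-biasing the abstract attributes to the human visual system.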


1985 ◽  
Vol 107 (1) ◽  
pp. 6-22 ◽  
Author(s):  
William D. McNally ◽  
Peter M. Sockol

A review is given of current computational methods for analyzing flows in turbomachinery and other related internal propulsion components. The methods are divided into two primary classes, inviscid and viscous. The inviscid methods deal specifically with turbomachinery applications. The viscous methods, on the other hand, reflecting the current state of the art, deal with generalized duct flows as well as flows in turbomachinery passages. Inviscid methods are categorized into potential, stream-function, and Euler approaches. Viscous methods are treated in terms of parabolic, partially parabolic, and elliptic procedures.


2017 ◽  
Vol 3 (3) ◽  
pp. 372-393 ◽  
Author(s):  
GABRIELE FERRETTI

ABSTRACT: Are face-to-face perception and picture perception different perceptual phenomena? The question is controversial. On the one hand, philosophers have offered several solid arguments showing that, despite some resemblances, they are quite different perceptual phenomena and that pictures are special objects of perception. On the other hand, neuroscientists routinely use pictures in experimental settings as substitutes for normal objects, and this practice is successful in explaining how the human visual system works. But this seems to imply that face-to-face perception and picture perception are very similar, if not actually the same. How can we decide between these two opposite intuitions? Here I offer a regimentation of the notion of picture perception that can reconcile these two apparently conflicting ideas about pictures. It follows that philosophers and neuroscientists can maintain their respective stances without any theoretical conflict.


Revista M ◽  
2019 ◽  
Vol 15 ◽  
pp. 80-114
Author(s):  
Elena Perria

This paper gives a general description of the state of the art in European and colonial building techniques found in wooden buildings that still exist today. The use and diffusion of wooden structures are described through a brief explanation of the relevant structural systems, the peculiarities of the elements that compose them, their operating principles, their points of weakness, and the joints used. The English and German techniques of structural timber framework are described in detail. On the other hand, other structural techniques and their evolution are presented through a review of the works and documents that describe them accurately.


Author(s):  
Albert V. Crewe

In accepting an invitation to speak in this symposium all the speakers today are exposing themselves to great hazards, and none more so than me. The capabilities of computers, their variety, and the associated instrumentation are all expanding so rapidly, and changes are taking place with such frequency, that we are all inevitably out of date. The mere fact of having to prepare an abstract in March for a meeting in August presents hazards. Who knows what revolution in the state of the art will be announced in the meantime? My own project was started four or five years ago, so we may be considerably behind the times. As time went along we were able to take advantage of some of the new developments in technology, but in many cases we were simply too far along to make adjustments. As a result, if one were to ask whether our project should serve as a model or whether we would do it the same way again, the answer would have to be “no.” On the other hand, some of the features are good and may serve as examples for future projects. The reader will have to decide.


2020 ◽  
Vol 22 (2) ◽  
pp. 31-50
Author(s):  
Ivan Tertuliano ◽  
Bruna Santana de Oliveira ◽  
Afonso Machado ◽  
José Montiel

The number of athletes playing outside their countries of birth grows every year, as does the number who choose to represent another national team. Faced with this phenomenon, the objective of this study was to conceptualize the process of expatriation in two sports (soccer and volleyball) through an essay, using qualitative research from the perspective of documentary analysis and pointing out the state of the art on this subject. The results indicate that the main reasons athletes give to justify expatriation are economic and professional, such as improved salary and career opportunities. On the other hand, the difficulties athletes encounter in the process are closely related to the absence of family alongside the athlete after expatriation. Accordingly, the literature points to the need for multidisciplinary preparation and monitoring for those who decide to leave their country, to avoid harm to sports performance.


2010 ◽  
Vol 20-23 ◽  
pp. 1136-1142
Author(s):  
Gui Feng ◽  
Yi Min Yang

The paper proposes an adaptive digital watermarking scheme based on a chaos sequence and the DCT transform. The scheme adaptively chooses the locations for watermark insertion and assigns the embedding intensity at each location according to the characteristics of the human visual system (HVS). On the other hand, the scheme combines a chaos sequence with a scrambling technique to improve its ability to withstand various attacks. The experimental results show that the method satisfies the basic transparency and robustness requirements.
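The abstract gives no implementation details, so the following is only a schematic of the kind of pipeline it describes: a logistic-map chaos sequence scrambles the watermark bits, which are then embedded into mid-frequency DCT coefficients of an image block. The fixed coefficient positions and embedding strength `alpha` below are simplifications standing in for the paper's HVS-adaptive choices:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (so D @ D.T is the identity)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2.0 / n)

def logistic_sequence(x0, length, mu=3.99):
    """Chaotic logistic-map sequence, x -> mu*x*(1-x), used to scramble bits."""
    seq = np.empty(length)
    for t in range(length):
        x0 = mu * x0 * (1 - x0)
        seq[t] = x0
    return seq

def embed(block, bits, x0=0.7, alpha=4.0):
    """Embed watermark bits into mid-frequency DCT coefficients of an
    8x8 block; the coefficient sign carries the scrambled bit."""
    D = dct_matrix(8)
    C = D @ block @ D.T
    scrambled = bits ^ (logistic_sequence(x0, len(bits)) > 0.5)
    pos = [(2, 3), (3, 2), (3, 3), (4, 2)][:len(bits)]  # fixed mid-band slots
    for (r, c), b in zip(pos, scrambled):
        C[r, c] = alpha if b else -alpha
    return D.T @ C @ D  # inverse DCT back to the pixel domain

def extract(block, n_bits, x0=0.7):
    """Blind extraction: read coefficient signs, then undo the scrambling."""
    D = dct_matrix(8)
    C = D @ block @ D.T
    pos = [(2, 3), (3, 2), (3, 3), (4, 2)][:n_bits]
    scrambled = np.array([C[r, c] > 0 for (r, c) in pos])
    return scrambled ^ (logistic_sequence(x0, n_bits) > 0.5)
```

Extraction needs only the chaos seed `x0`, not the original image, which is what makes the scrambling act as a key: without the right seed the recovered bits are meaningless.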


ETNOLINGUAL ◽  
2018 ◽  
Vol 2 (2) ◽  
Author(s):  
Hana Nurul Hasanah

Standard Indonesian is the high variety used primarily in writing and on formal occasions. On the other hand, the variety commonly used by Indonesians is Colloquial Indonesian, and Colloquial Jakarta Indonesian is the most popular and influential variety in daily conversation. Despite the importance of intonation in enhancing communication, comprehensive research on Colloquial Jakarta Indonesian has rarely been conducted. This paper presents and discusses research results concerning Indonesian intonation and illustrates the general picture of colloquial Indonesian intonation. Considering the state-of-the-art findings of previous research, it concludes with possible future investigations of Colloquial Indonesian intonation.
Keywords: Intonation, colloquial Indonesian, literature review


2017 ◽  
Vol 1 (1) ◽  
pp. 90
Author(s):  
Dian Septiandani ◽  
Abd. Shomad

Zakat is one of the principal acts of worship, requiring every individual (<em>mukallaf</em>) with considerable property to spend some of that wealth as zakat under the several conditions that apply. On the other hand, tax is an obligation assigned to taxpayers that must be deposited with the state under the applicable policies, with no direct return as reward, to finance general national expenses. In their development, both zakat and tax have received considerable attention in Islamic economic thought. This study first seeks to identify the principles of zakat and tax at the time of Rasulullah SAW, and therefore takes a normative approach. The primary data were collected through library/document research, and the secondary data through a literature review, inventorying and collecting textbooks and other documents related to the issue under study.


2021 ◽  
Vol 15 (5) ◽  
pp. 1-32
Author(s):  
Quang-huy Duong ◽  
Heri Ramampiaro ◽  
Kjetil Nørvåg ◽  
Thu-lan Dam

Dense subregion (subgraph & subtensor) detection is a well-studied area with a wide range of applications, and numerous efficient approaches and algorithms have been proposed. Approximation approaches are commonly used for detecting dense subregions due to the complexity of the exact methods. Existing algorithms are generally efficient for dense subtensor and subgraph detection and perform well in many applications. However, most existing works rely on the state-of-the-art greedy 2-approximation algorithm, which provides solutions with only a loose theoretical density guarantee. The main drawback of most of these algorithms is that they can estimate only one subtensor, or subgraph, at a time, with a low guarantee on its density. While some methods can, on the other hand, estimate multiple subtensors, they can give a guarantee on the density with respect to the input tensor for the first estimated subtensor only. We address these drawbacks by providing both theoretical and practical solutions for estimating multiple dense subtensors in tensor data and by giving a higher lower bound on the density. In particular, we prove a higher bound on the lower-bound density of the estimated subgraphs and subtensors. We also propose a novel approach showing that there are multiple dense subtensors with a guarantee on their density that is greater than the lower bound used in the state-of-the-art algorithms. We evaluate our approach with extensive experiments on several real-world datasets, which demonstrate its efficiency and feasibility.
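The greedy 2-approximation the authors refer to is, in the subgraph case, the classic peeling algorithm: repeatedly delete a minimum-degree vertex and keep the intermediate vertex set with the highest density |E|/|V|. A self-contained sketch of that baseline (not the authors' multi-subtensor method):

```python
from collections import defaultdict

def densest_subgraph_peel(edges):
    """Greedy 2-approximation for the densest subgraph: peel off a
    minimum-degree vertex at each step and remember the densest
    intermediate subgraph, where density = |E| / |V|."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    nodes = set(adj)
    m = len(edges)
    best_density, best_set = 0.0, set(nodes)
    while nodes:
        density = m / len(nodes)
        if density > best_density:
            best_density, best_set = density, set(nodes)
        u = min(nodes, key=lambda x: len(adj[x]))  # minimum-degree vertex
        m -= len(adj[u])                           # its edges leave the graph
        for v in adj[u]:
            adj[v].discard(u)
        nodes.remove(u)
        del adj[u]
    return best_set, best_density

# A 4-clique (density 1.5) with one pendant vertex attached.
edges = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4), (1, 5)]
print(densest_subgraph_peel(edges))  # the peeling isolates the clique
```

The returned density is guaranteed to be at least half the optimum, which is exactly the "loose theoretical density guarantee" the abstract sets out to tighten.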

