classical semantic
Recently Published Documents


TOTAL DOCUMENTS: 11 (FIVE YEARS 4)

H-INDEX: 2 (FIVE YEARS 0)

2022 ◽  
Author(s):  
Nitin Kumar

Abstract In order to solve the problems of poor region delineation and boundary artifacts in Indian-style transfer of images, an improved Variational Autoencoder (VAE) method for dress style transfer is proposed. First, the YOLOv3 model is used to quickly localize the dress in the input image; then a classical semantic segmentation algorithm, the fully convolutional network (FCN), finely delineates the target dress region in a second pass; finally, the trained VAE model generates the Indian-style image, supported by a decision support system. The results show that, compared with the traditional style transfer model, the improved VAE model produces finer synthetic images for dress style transfer and adapts to different traditional Indian styles, meeting the application requirements of dress style transfer scenarios. We evaluated several deep-learning-based models and achieved a BLEU score of 0.6 on average; the transformer-based model outperformed the others, reaching a BLEU score of up to 0.72.
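The pipeline is easiest to see end to end: detect the dress, mask it, then re-style only the masked region. The sketch below mirrors that three-stage flow in plain Python with stand-in functions; a real system would run trained YOLOv3, FCN, and VAE models here, and every function name and placeholder operation is an illustrative assumption, not the paper's code.

```python
# Illustrative sketch of the three-stage pipeline described above
# (detect -> segment -> re-style). All three "models" are
# hypothetical stand-ins for the trained networks.
import numpy as np

def detect_dress(image: np.ndarray) -> tuple[int, int, int, int]:
    """Stand-in for YOLOv3 detection: returns an (x, y, w, h) box.
    A trained detector would run here."""
    h, w = image.shape[:2]
    return (w // 4, h // 4, w // 2, h // 2)  # dummy central box

def segment_region(image, box):
    """Stand-in for the FCN pass: returns a boolean mask restricted
    to the detected box (the 'fine delineation' step)."""
    x, y, w, h = box
    mask = np.zeros(image.shape[:2], dtype=bool)
    mask[y:y + h, x:x + w] = True
    return mask

def vae_restyle(image, mask):
    """Stand-in for the trained VAE decoder: re-styles only the
    masked pixels, leaving the background untouched."""
    styled = image.copy()
    styled[mask] = 255 - styled[mask]  # placeholder 'style' operation
    return styled

image = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
box = detect_dress(image)
mask = segment_region(image, box)
result = vae_restyle(image, mask)
```

Restricting the generative step to the segmented region is what avoids the boundary artifacts the abstract mentions: pixels outside the mask are never touched by the decoder.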


2021 ◽  
Vol 19 (3) ◽  
pp. 26-39
Author(s):  
D. E. Shabalina ◽  
K. S. Lanchukovskaya ◽  
T. V. Liakh ◽  
K. V. Chaika

The article is devoted to evaluating the applicability of existing semantic segmentation algorithms to the “Duckietown” simulator. It explores classical semantic segmentation algorithms as well as those based on neural networks. We also examined machine learning frameworks, taking into account all the limitations of the “Duckietown” simulator. Based on the research results, we selected neural network algorithms built on the U-Net, SegNet, DeepLab-v3, FC-DenseNet, and PSPNet architectures to solve the segmentation problem in the “Duckietown” project. U-Net and SegNet have been tested on the “Duckietown” simulator.
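As a rough illustration of the architecture family the authors tested, here is a heavily reduced U-Net-style encoder/decoder in PyTorch with a single skip connection. The layer sizes, class count, and structure are assumptions for demonstration only and do not reflect the configurations evaluated in the article.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal U-Net-style encoder/decoder with one skip connection;
    an illustration of the architecture family, not the authors' setup."""
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.head = nn.Conv2d(32, n_classes, 1)  # 16 skip + 16 upsampled

    def forward(self, x):
        skip = self.enc(x)               # full-resolution features
        mid = self.mid(self.down(skip))  # downsampled features
        up = self.up(mid)                # back to full resolution
        return self.head(torch.cat([skip, up], dim=1))  # per-pixel logits

logits = TinyUNet()(torch.randn(1, 3, 64, 64))  # -> shape (1, 5, 64, 64)
```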


2021 ◽  
Vol 70 ◽  
pp. 1557-1636
Author(s):  
Melisa G. Escañuela Gonzalez ◽  
Maximiliano C. D. Budán ◽  
Gerardo I. Simari ◽  
Guillermo R. Simari

An essential part of argumentation-based reasoning is to identify arguments in favor of and against a statement or query, select the acceptable ones, and then determine whether or not the original statement should be accepted. We present here an abstract framework that considers two independent forms of argument interaction, support and conflict, and is able to represent distinctive information associated with these arguments. This information enables additional capabilities, such as: (i) a more in-depth analysis of the relations between the arguments; (ii) a representation of the user's posture to help focus the argumentative process, optimizing the values of attributes associated with certain arguments; and (iii) an enhancement of the semantics that takes advantage of the richer information available about argument acceptability. The classical semantic definitions are thus enhanced, and a set of postulates they satisfy is analyzed. Finally, a polynomial-time algorithm that performs the labeling process while taking these argument interactions into account is introduced.
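For intuition about what a polynomial-time labeling looks like, the sketch below computes the standard grounded labeling over a plain attack graph in Python. It is a minimal classical baseline, not the authors' algorithm: it omits the support relation and the attribute information their framework attaches to arguments.

```python
from collections import defaultdict

def grounded_labeling(arguments, attacks):
    """Fixpoint computation of the classical grounded labeling
    (in / out / undec) over an attack graph given as (a, b) pairs
    meaning 'a attacks b'."""
    attackers = defaultdict(set)
    for a, b in attacks:
        attackers[b].add(a)
    label = {arg: "undec" for arg in arguments}
    changed = True
    while changed:  # each pass can only fix labels, so this is polynomial
        changed = False
        for arg in arguments:
            if label[arg] != "undec":
                continue
            if all(label[a] == "out" for a in attackers[arg]):
                label[arg] = "in"   # every attacker is defeated
                changed = True
            elif any(label[a] == "in" for a in attackers[arg]):
                label[arg] = "out"  # some accepted attacker defeats it
                changed = True
    return label

# a attacks b, b attacks c: a is in, b is out, c is reinstated (in)
print(grounded_labeling({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```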


Linguistics ◽  
2019 ◽  
Author(s):  
Thanasis Georgakopoulos

A semantic map is a method for visually representing cross-linguistic regularity or universality in semantic structure. This method has proved attractive to typologists because it provides a convenient graphical display of the interrelationships between meanings or functions across languages, while at the same time differentiating what is universal from what is language-specific. The semantic map model was initially conceived to describe patterns of polysemy (or, more generally, of co-expression) in grammatical categories. However, several studies have shown that it can be fruitfully extended to lexical items and even constructions, suggesting that any type of meaning can be integrated into a map. The main idea of the method is that the spatial arrangement of the various meanings reflects their degree of (dis)similarity: the more similar the meanings, the closer they are placed, in accordance with the so-called connectivity hypothesis.

Within the semantic map tradition, closeness has taken different forms depending on the approach adopted. In classical semantic maps (alternative terms: “first generation,” “implicational,” “connectivity” maps), the relation between meanings is represented as a line; this is the graph-based approach. In proximity maps (alternative terms: “similarity,” “second generation,” “statistical,” “probabilistic” maps), the distance between two meanings in space, each represented as a point, indicates the degree of their similarity. In this scale- or distance-based approach, the maps are constructed using multivariate statistical techniques, including the family of methods known as multidimensional scaling (MDS).

Both classical and proximity maps have been widely used, although the latter have recently gained interest and popularity under the assumption that they can handle large datasets more efficiently than classical semantic maps. However, classical semantic maps continue to be useful for studies aiming to discover universal semantic structures. Most importantly, classical maps can integrate information about directionality of change by drawing an arrow on the line connecting two meanings or functions.

Beyond the choice between the two types of maps, one issue that has sparked debate and critical reflection among researchers is the universal relevance of semantic maps: the main question is whether semantic maps reflect the global geography of the human mind. Another much-discussed issue is the identification of the factors that increase the accuracy of semantic maps in a way that allows for valid cross-linguistic generalizations. Such factors include the choice of a representative language sample, the quality of the collected cross-linguistic material, and the establishment of valid cross-linguistic comparators.

Acknowledgments: The author wishes to thank one anonymous reviewer for their useful comments. For discussion of the material in this article, the author is grateful to Stéphane Polis.
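A small sketch may help make the graph-based approach concrete: the Python snippet below encodes a toy classical map as an edge list and checks the connectivity hypothesis, i.e., that the set of meanings co-expressed by a single form occupies a connected region of the map. The meanings and edges are invented for illustration, not drawn from any attested map.

```python
# Toy classical semantic map (graph-based approach): nodes are meanings,
# edges are the lines of the map. Edges here are invented for illustration.
MAP_EDGES = [("possibility", "permission"), ("permission", "obligation"),
             ("possibility", "ability")]

def is_connected(meanings, edges):
    """Check the connectivity hypothesis for one form's meaning set:
    BFS over the subgraph of the map induced by `meanings`."""
    meanings = set(meanings)
    if not meanings:
        return True
    adjacency = {m: set() for m in meanings}
    for a, b in edges:
        if a in meanings and b in meanings:
            adjacency[a].add(b)
            adjacency[b].add(a)
    seen, frontier = set(), [next(iter(meanings))]
    while frontier:
        node = frontier.pop()
        if node in seen:
            continue
        seen.add(node)
        frontier.extend(adjacency[node] - seen)
    return seen == meanings  # connected iff BFS reached every meaning

# A hypothetical modal marker co-expressing three adjacent meanings:
print(is_connected({"ability", "possibility", "permission"}, MAP_EDGES))  # True
# A co-expression pattern that would violate the hypothesis on this map:
print(is_connected({"ability", "obligation"}, MAP_EDGES))                 # False
```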


2018 ◽  
Vol 28 (5) ◽  
pp. 1060-1072
Author(s):  
Bruno R Mendonça ◽  
Walter A Carnielli

Abstract We prove that the minimal Logic of Formal Inconsistency (LFI) $\mathsf{QmbC}$ (the basic quantified logic of formal inconsistency) validates a weaker version of Fraïssé’s theorem (FT). LFIs are paraconsistent logics that relativize the Principle of Explosion to consistent formulas only. Despite the recent interest in LFIs, their model-theoretic properties are still not fully understood. Our aim in this paper is to investigate the situation. Our interest in FT has to do with its fruitfulness: the preservation of FT indicates that a number of other classical semantic properties can also be salvaged in LFIs. Further, given that FT depends on truth-functionality (a property that, in general, fails in LFIs), whether full FT holds for $\mathsf{QmbC}$ becomes a challenging question.
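For context, LFIs in the mbC family are usually presented with a consistency operator $\circ$ that gates explosion; a standard formulation of the contrast (stated here as background on LFIs, not as a result of the paper) is:

```latex
% Classical explosion vs. the "gentle" explosion of LFIs such as mbC,
% where \circ\alpha reads "alpha is consistent":
\begin{align*}
  \alpha,\ \neg\alpha &\vdash \beta
    && \text{(classical explosion; rejected in LFIs)}\\
  {\circ}\alpha,\ \alpha,\ \neg\alpha &\vdash \beta
    && \text{(gentle explosion; valid in } \mathsf{QmbC}\text{)}
\end{align*}
```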


Author(s):  
Bruno Whittle ◽  
Bradley Armour-Garb

According to this chapter, approaches to truth and to the liar paradox appear to face a dilemma, as they must, it seems, appeal to some sort of hierarchy or contend that a putatively coherent concept is actually incoherent, either of which results in expressive limitations. The chapter proposes a new approach to the liar paradox that avoids such expressive limitations. This approach countenances classical semantic values while advocating a revision to how we think about compositional rules. The idea is that there are exceptions to the compositional rules associated with a language. To this end, the chapter adverts to theories that respect the “Chrysippus intuition,” which captures the idea that different tokens of the same type can have divergent semantic statuses. Such theories yield models of languages whose semantic values are classical but where the compositional rules associated with these languages have exceptions.


Author(s):  
Alexandra Galatescu

The proposed translation of natural language (NL) patterns to object and process modeling is presented as an alternative to symbolic notations, textual languages, and classical semantic networks, which are today's main representation tools. Its necessity is motivated by the universality, unifying ability, natural extensibility, logic, and reusability of NL. The translation relies on a formalized, stylized, and graphical representation of NL, bridging NL to an integrated view of object and process modeling. Only morphological and syntactic knowledge in NL is subject to translation, but the proposed solution anticipates the semantic and logical interpretation of a model. A brief presentation and exemplification of the NL patterns under consideration precede the translation.


2008 ◽  
Vol 34 (1) ◽  
pp. 39-46 ◽  
Author(s):  
Johan Van Der Auwera
