Ontology Supported Automatic Generation of High-Quality Semantic Metadata

Author(s):  
Ümit Yoldas ◽  
Gábor Nagypál


Author(s):  
Yue Jiang ◽  
Zhouhui Lian ◽  
Yingmin Tang ◽  
Jianguo Xiao

Automatic generation of Chinese fonts, which consist of large numbers of glyphs with complicated structures, remains a challenging and ongoing problem in AI and Computer Graphics (CG). Traditional CG-based methods typically rely heavily on manual intervention, while recently popularized deep learning-based end-to-end approaches often produce synthesis results with incorrect structures and/or serious artifacts. To address these problems, this paper proposes SCFont, a structure-guided Chinese font generation system built on deep stacked networks. The key idea is to integrate the domain knowledge of Chinese characters with deep generative networks to ensure that high-quality glyphs with correct structures can be synthesized. More specifically, we first apply a CNN model to learn how to transfer the writing trajectories, with separated strokes, of the reference font style into those of the target style. Then, we train another CNN model to learn how to recover shape details on the contours of the synthesized writing trajectories. Experimental results validate the superiority of the proposed SCFont over the state of the art in both visual and quantitative assessments.
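
The abstract describes a two-stage stacked pipeline: one network maps stroke trajectories from the reference style to the target style, and a second network restores contour detail. As a rough illustration of that stacked design (not the authors' actual SCFont architecture; the module names, layer choices, and tensor shapes below are hypothetical), a PyTorch-style sketch might look like:

```python
# Hypothetical sketch of a two-stage "stacked" glyph synthesis pipeline,
# loosely following the abstract. Module names, channel counts, and the
# simple conv blocks are illustrative assumptions, not the SCFont design.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class TrajectoryTransferNet(nn.Module):
    """Stage 1: map stroke-trajectory maps in the reference style
    to writing trajectories in the target style."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, ref_trajectory):
        return self.net(ref_trajectory)

class ContourRefineNet(nn.Module):
    """Stage 2: recover contour/shape detail from synthesized trajectories."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, trajectory):
        return torch.sigmoid(self.net(trajectory))

stage1, stage2 = TrajectoryTransferNet(), ContourRefineNet()
ref = torch.rand(1, 1, 128, 128)   # reference-style trajectory map
glyph = stage2(stage1(ref))        # full glyph image in the target style
print(glyph.shape)                 # torch.Size([1, 1, 128, 128])
```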


Author(s):  
Jie Pan ◽  
Jingwei Huang ◽  
Yunli Wang ◽  
Gengdong Cheng ◽  
Yong Zeng

Abstract Automatic generation of high-quality meshes is fundamental to CAD/CAE systems. Element extraction is a major mesh generation method owing to its ability to generate high-quality meshes around the domain boundary and to control local mesh densities. However, its widespread application has been inhibited by the difficulty of generating satisfactory meshes in the interior of a domain, or even of generating a complete mesh at all. The primary challenge for element extraction methods is to define extraction rules that achieve high-quality meshes in both the boundary and the interior of geometric domains with complex shapes. This paper presents a self-learning element extraction system, FreeMesh-S, that can automatically acquire robust, high-quality element extraction rules. Two central components enable FreeMesh-S: (1) three primitive structures of element extraction rules, constructed according to the boundary patterns of arbitrary geometric boundary shapes; and (2) a novel self-learning scheme that automatically defines and refines the relationships between the parameters of the element extraction rules by combining an Advantage Actor-Critic (A2C) reinforcement learning network with a Feedforward Neural Network (FNN). The A2C network learns the mesh generation process through randomized element extraction actions, using element quality as the reward signal, and produces increasingly high-quality elements over time. The FNN is then trained on the meshes generated by the A2C network so that high-quality elements can be produced quickly. FreeMesh-S is demonstrated on two-dimensional quad mesh generation, and its meshing performance is compared with three existing popular approaches on ten pre-defined domain boundaries. The experimental results show that, even though much less domain knowledge is required to develop the algorithm, FreeMesh-S outperforms the three approaches on the essential quality indices. FreeMesh-S significantly reduces the time and expertise needed to create high-quality mesh generation algorithms.
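
The core learning signal described above is simple: the agent extracts one element per step and is rewarded by that element's quality. The sketch below illustrates only that idea, with a bandit-style update standing in for the A2C network, a toy quad-quality metric, and the FNN distillation step omitted; none of this is the FreeMesh-S implementation.

```python
# Hypothetical, heavily simplified sketch of the self-learning idea in the
# abstract: an agent chooses how to extract the next element, is rewarded by
# the element's quality, and improves over time. All numbers are invented.
import random

def element_quality(quad):
    """Toy quality metric: ratio of shortest to longest edge of a quad
    (1.0 = well-shaped, near 0.0 = degenerate)."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    edges = [dist(quad[i], quad[(i + 1) % 4]) for i in range(4)]
    return min(edges) / max(edges)

ACTIONS = [0.5, 1.0, 1.5]          # candidate heights for the extracted quad
value = {a: 0.0 for a in ACTIONS}  # running value estimate per action
counts = {a: 0 for a in ACTIONS}

for step in range(5000):
    # Epsilon-greedy stand-in for the actor: explore, else act greedily.
    if random.random() < 0.1:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: value[x])
    # Extract a quad of width 1 and height `a` from a straight boundary edge.
    quad = [(0, 0), (1, 0), (1, a), (0, a)]
    reward = element_quality(quad)            # element quality as the reward
    counts[a] += 1
    value[a] += (reward - value[a]) / counts[a]

print(max(ACTIONS, key=lambda x: value[x]))   # learns 1.0 (the square quad)
```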


Author(s):  
Эльмира Шамильевна Кремлева ◽  
Александр Павлович Снегуренко ◽  
Светлана Владимировна Новикова ◽  
Наталья Львовна Валитова

The article describes decision-making methods based on intelligent learning algorithms whose construction uses verbal elements. Such algorithms and methods usually operate on strictly quantitative data, yet they must take into account the human way of perceiving information in verbal form. A human does not directly participate in the process of building the model, that is, its structure does not depend on expert or other human opinions; however, qualitative verbal information (for example, elements of regulations, documents, orders, etc.) is embedded into the algorithm in coded form. Computational experiments are presented.
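
The central idea, embedding qualitative verbal information into an otherwise numeric algorithm in coded form, can be shown with a minimal sketch; the verbal scale and codes below are invented for illustration and are not taken from the article.

```python
# Minimal illustrative sketch (assumed example, not from the article):
# verbal elements, e.g. severity terms taken from a regulatory document,
# are embedded into a numeric model via an ordinal coding.
VERBAL_CODES = {"negligible": 1, "minor": 2, "significant": 3, "critical": 4}

def risk_score(defect_count, severity_term):
    """Combine strictly quantitative data (defect_count) with a coded
    verbal element (severity_term) in one numeric calculation."""
    return defect_count * VERBAL_CODES[severity_term]

print(risk_score(3, "significant"))  # 9
```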


10.29007/zbb8 ◽  
2018 ◽  
Author(s):  
Emanuele Di Rosa ◽  
Enrico Giunchiglia ◽  
Massimo Narizzano ◽  
Gabriele Palma ◽  
Alessandra Puddu

Software testing is the most widely used technique for software verification in industry. In the case of safety-critical software, the test set can be required to cover a high percentage (up to 100%) of the software code according to some metric. Unfortunately, attaining such high percentages is not easy using standard automatic test generation tools, and manual generation by domain experts is often necessary, significantly increasing the associated costs.

In previous papers, we have shown how the test generation process for C programs can be automated via the bounded model checker CBMC. In particular, we have shown how CBMC can be productively used for the automatic generation of test sets covering 100% of the branches of 5 modules of ERTMS/ETCS, a safety-critical industrial software system by Ansaldo STS. Unfortunately, the test sets we automatically generated are of lower "quality" than the test sets manually generated by domain experts: both attain the desired 100% branch coverage, but the automatically generated test sets are roughly twice the size of the corresponding manually generated ones. Indeed, the automatically generated test sets contain redundant tests, i.e., tests that do not contribute to reaching the desired 100% branch coverage. Such redundant tests are useless from the perspective of branch coverage, are not easy to detect and eliminate a posteriori, and, if kept, imply additional costs during the verification process.

In this paper we present a new methodology for the automatic generation of "high quality" test sets guaranteeing full branch coverage. Starting from an initially empty test set T, the basic idea is to repeatedly extend T with a test covering as many as possible of the branches not yet covered by T. This requires an analysis of the control flow graph of the program to first identify a path p with the desired property, followed by the run of a tool (CBMC in our case) that either returns a test causing the execution of p or establishes that no such test exists (under the given assumptions). We have evaluated the methodology on 31 modules of the Ansaldo STS ERTMS/ETCS software, thus greatly extending the benchmark set. For 27 of the 31 modules we succeeded in our goal of automatically generating "high quality" test sets attaining full branch coverage: all feasible branches are executed by at least one test, and our test sets are significantly smaller than the test sets manually generated by domain experts (and thus also significantly smaller than the test sets automatically generated with our previous methodology). For the remaining 4 modules, however, we were unable to automatically generate test sets attaining full branch coverage: these modules contain complex functions that exceed CBMC's capacity.

Our analysis of 31 modules greatly extends our previous analysis based on 5 modules, confirming that automatic test generation tools based on CBMC can be productively used in industry to attain full branch coverage. Furthermore, the methodology presented in this paper further increases productivity by substantially reducing the number of generated tests and thus the costs of the testing phase.
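
The greedy loop described above (repeatedly pick a path covering as many still-uncovered branches as possible, then ask the model checker for a test driving it) can be sketched as follows. Here a "path" is reduced to the set of branches it executes, and the two helper functions are stand-ins for the CFG analysis and the CBMC invocation, not the authors' actual tooling.

```python
# Hypothetical sketch of the greedy test-generation loop from the paper.
# Generating a test for a feasible path simply returns the path itself;
# in the real methodology the path comes from control-flow-graph analysis
# and the test (or a proof of infeasibility) comes from CBMC.
ALL_PATHS = [
    {"b1", "b2", "b3"},           # each set: branches covered by one path
    {"b1", "b4"},
    {"b5"},
    {"b6"},                       # suppose this path is infeasible
]
INFEASIBLE = {frozenset({"b6"})}
ALL_BRANCHES = set().union(*ALL_PATHS)

def find_path_maximizing_uncovered(covered):
    """Stand-in for CFG analysis: pick the path with most uncovered branches."""
    return max(ALL_PATHS, key=lambda p: len(p - covered))

def cbmc_generate_test(path):
    """Stand-in for the CBMC call: a test driving `path`, or None if infeasible."""
    return None if frozenset(path) in INFEASIBLE else path

tests, covered = [], set()
while covered != ALL_BRANCHES:
    path = find_path_maximizing_uncovered(covered)
    if not path - covered:
        break                     # only infeasible branches remain uncovered
    test = cbmc_generate_test(path)
    if test is None:
        ALL_PATHS.remove(path)    # drop the infeasible path and retry
        continue
    tests.append(test)
    covered |= path               # every kept test adds new branch coverage

print(len(tests), "tests covering", sorted(covered))
```

Because every test that enters the set contributes at least one new branch, the loop cannot produce the redundant tests that inflated the earlier test sets.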


2020 ◽  
pp. 1-27 ◽  
Author(s):  
Marc Schulder ◽  
Michael Wiegand ◽  
Josef Ruppenhofer

Abstract Alleviating pain is good and abandoning hope is bad. We instinctively understand how words like alleviate and abandon affect the polarity of a phrase, inverting or weakening it. When these words are content words, such as verbs, nouns, and adjectives, we refer to them as polarity shifters. Shifters are a frequent occurrence in human language and an important part of successfully modeling negation in sentiment analysis; yet research on negation modeling has focused almost exclusively on a small handful of closed-class negation words, such as not, no, and without. A major reason for this is that shifters are far more lexically diverse than negation words, but no resources exist to help identify them. We seek to remedy this lack of shifter resources by introducing a large lexicon of polarity shifters that covers English verbs, nouns, and adjectives. Creating the lexicon entirely by hand would be prohibitively expensive. Instead, we develop a bootstrapping approach that combines automatic classification with human verification to ensure the high quality of our lexicon while reducing annotation costs by over 70%. Our approach leverages a number of linguistic insights: while some features are based on textual patterns, others use semantic resources or syntactic relatedness. The created lexicon is evaluated both on a polarity shifter gold standard and on a polarity classification task.
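
The cost saving comes from the classify-then-verify pattern: an automatic classifier ranks candidate words, and only the highest-scoring candidates are passed to a human annotator. The sketch below shows only that pattern; the scoring function, verification step, and word lists are placeholder assumptions, not the paper's features or data.

```python
# Minimal sketch of the bootstrapping pattern from the abstract: rank
# candidates automatically, have a human verify only the top-ranked ones.
def shifter_score(word):
    """Placeholder for the paper's feature-based classifier (textual
    patterns, semantic resources, syntactic relatedness)."""
    return 1.0 if word in {"alleviate", "abandon", "diminish"} else 0.1

def human_verifies(word):
    """Placeholder for manual verification by an annotator."""
    return word != "diminish"   # pretend one candidate is a false positive

candidates = ["alleviate", "abandon", "diminish", "table", "walk"]
ranked = sorted(candidates, key=shifter_score, reverse=True)
top_k = ranked[:3]                       # only the top-ranked go to a human
lexicon = [w for w in top_k if human_verifies(w)]
print(lexicon)                           # ['alleviate', 'abandon']
```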


Author(s):  
Vladimir Ivanov ◽  
Valery Solovyev

Concrete/abstract word ratings are used in a growing body of psychological and neurophysiological research. For a few languages, large dictionaries have been created manually; this is a very time-consuming and costly process. To generate large, high-quality dictionaries of concrete/abstract words automatically, one needs to extrapolate the expert assessments obtained on smaller samples. The research question that arises is how small such samples can be while still allowing a good enough extrapolation. In this paper, we present a method for automatically ranking the concreteness of words and propose an approach that significantly decreases the amount of expert assessment required. The method has been evaluated on a large test set for English. The quality of the constructed dictionaries is comparable to that of expert-built ones, and the correlation between predicted and expert ratings is higher than that achieved by state-of-the-art methods.
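
One common way to extrapolate a small seed of expert ratings to a full vocabulary is to regress the ratings on word embeddings and predict for all unrated words. The sketch below assumes this embedding-plus-regression setup as an illustration of the extrapolation idea, not as the authors' specific method; the embeddings and ratings are tiny invented placeholders.

```python
# Hedged sketch: extrapolate expert concreteness ratings from a small seed
# sample to unrated words by regressing ratings on word embeddings.
import numpy as np
from sklearn.linear_model import Ridge

# Placeholder 3-d "embeddings"; real ones would come from word2vec/fastText.
emb = {
    "stone": np.array([0.9, 0.1, 0.0]),
    "chair": np.array([0.8, 0.2, 0.1]),
    "idea":  np.array([0.1, 0.9, 0.3]),
    "hope":  np.array([0.0, 0.8, 0.4]),
    "table": np.array([0.85, 0.15, 0.05]),  # unrated, to be predicted
}
# Small seed of expert ratings on a 1 (abstract) .. 5 (concrete) scale.
seed = {"stone": 4.9, "chair": 4.7, "idea": 1.4, "hope": 1.2}

X = np.stack([emb[w] for w in seed])
y = np.array(list(seed.values()))
model = Ridge(alpha=1.0).fit(X, y)

# Extrapolate to every word not in the seed sample.
for w in emb:
    if w not in seed:
        print(w, round(float(model.predict(emb[w][None, :])[0]), 2))
```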

