semantic dependencies
Recently Published Documents


TOTAL DOCUMENTS: 54 (five years: 23)
H-INDEX: 7 (five years: 1)

2022, Vol 6 (POPL), pp. 1-30
Author(s): Alan Jeffrey, James Riely, Mark Batty, Simon Cooksey, Ilya Kaysin, ...

Program logics and semantics tell a pleasant story about sequential composition: when executing (S1; S2), we first execute S1 then S2. To improve performance, however, processors execute instructions out of order, and compilers reorder programs even more dramatically. By design, single-threaded systems cannot observe these reorderings; however, multi-threaded systems can, making the story considerably less pleasant. A formal attempt to understand the resulting mess is known as a “relaxed memory model.” Prior models either fail to address sequential composition directly, or overly restrict processors and compilers, or permit nonsense thin-air behaviors which are unobservable in practice. To support sequential composition while targeting modern hardware, we enrich the standard event-based approach with preconditions and families of predicate transformers. When calculating the meaning of (S1; S2), the predicate transformer applied to the precondition of an event e from S2 is chosen based on the set of events in S1 upon which e depends. We apply this approach to two existing memory models.
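The gap between the "pleasant story" and relaxed hardware is usually illustrated with the store-buffering litmus test. The sketch below is a toy Python enumeration (not the paper's predicate-transformer semantics): it shows that the outcome r1 = r2 = 0 is impossible under any sequentially consistent interleaving, but appears as soon as each thread is allowed to reorder its independent store/load pair.

```python
def interleavings(t1, t2):
    # All ways to merge two instruction sequences, preserving each thread's order.
    if not t1:
        yield t2
        return
    if not t2:
        yield t1
        return
    for rest in interleavings(t1[1:], t2):
        yield [t1[0]] + rest
    for rest in interleavings(t1, t2[1:]):
        yield [t2[0]] + rest

def run(schedule):
    # Execute one global schedule over shared memory; return the final registers.
    mem = {"x": 0, "y": 0}
    regs = {}
    for kind, a, b in schedule:
        if kind == "store":
            mem[a] = b
        else:  # load
            regs[a] = mem[b]
    return (regs["r1"], regs["r2"])

# Store-buffering litmus test: T1 stores x then loads y; T2 stores y then loads x.
T1 = [("store", "x", 1), ("load", "r1", "y")]
T2 = [("store", "y", 1), ("load", "r2", "x")]

# Sequentially consistent outcomes: every interleaving of the original threads.
sc = {run(s) for s in interleavings(T1, T2)}

# "Relaxed" outcomes: also allow each thread's independent store/load to swap.
relaxed = set()
for p1 in (T1, T1[::-1]):
    for p2 in (T2, T2[::-1]):
        relaxed |= {run(s) for s in interleavings(list(p1), list(p2))}

print(sorted(sc))       # (0, 0) is absent under sequential consistency
print(sorted(relaxed))  # (0, 0) appears once per-thread reordering is allowed
```

This is exactly the observation the abstract starts from: single-threaded code cannot tell the two execution models apart, but the second thread can witness the reordering.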


Author(s): Vladimir V. Voronin, Aleksey V. Morozov

Today, almost everyone faces computer security problems in one way or another. Antivirus programs are used to counter threats posed by malicious software. Conventional methods for detecting malware are no longer effective enough; nowadays, neural networks and behavioral analysis are being used for these purposes. Analyzing the behavior of programs is a difficult task, since there is no clear sequence of actions that definitively identifies a program as malicious. In addition, such programs take measures to resist detection, for example, by masking their working sequence with meaningless noise actions. There is also the problem of uniquely identifying the class of a malware sample, since samples assigned to different classes can use similar methods. In this paper, we propose to apply NLP methods such as word embeddings and LDA to the analysis of malware API call sequences in order to reveal the presence of semantic dependencies and to assess the effectiveness of these methods. The results obtained indicate that it is possible to identify the key features of malware behavior, which in the future will significantly improve the technology for detecting and identifying such programs.
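The core idea of treating API-call traces like sentences can be sketched without any NLP library. Below is a minimal count-based stand-in for word embeddings (the paper uses learned embeddings such as word2vec): each API call gets a vector of window co-occurrence counts, and calls that participate in the same behavioral pattern end up close in cosine similarity. The trace contents are hypothetical examples, not data from the paper.

```python
import math

# Toy API-call traces; each trace plays the role of a "sentence".
traces = [
    ["OpenProcess", "VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread"],
    ["OpenProcess", "VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread"],
    ["RegOpenKey", "RegSetValue", "RegCloseKey"],
]

def cooccurrence_vectors(traces, window=2):
    # Count how often each call appears near each other call: a crude,
    # count-based stand-in for learned word embeddings.
    vocab = sorted({c for t in traces for c in t})
    index = {c: i for i, c in enumerate(vocab)}
    vecs = {c: [0.0] * len(vocab) for c in vocab}
    for t in traces:
        for i, c in enumerate(t):
            for j in range(max(0, i - window), min(len(t), i + window + 1)):
                if i != j:
                    vecs[c][index[t[j]]] += 1.0
    return vecs

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

vecs = cooccurrence_vectors(traces)
# Calls from the same injection pattern are closer than unrelated registry calls.
print(cosine(vecs["VirtualAllocEx"], vecs["WriteProcessMemory"]))
print(cosine(vecs["VirtualAllocEx"], vecs["RegSetValue"]))
```

A learned embedding would replace the raw counts, but the semantic-dependency signal the paper exploits, i.e. which calls habitually occur together, is already visible at this level.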


2021, Vol 11 (16), pp. 7406
Author(s): Ricardo Kleinlein, Cristina Luna-Jiménez, David Arias-Cuadrado, Javier Ferreiros, Fernando Fernández-Martínez

Not every visual media production is equally retained in memory. Recent studies have shown that the elements of an image, as well as their mutual semantic dependencies, provide a strong clue as to whether a video clip will be recalled on a second viewing or not. We believe that short textual descriptions encapsulate most of these relationships among the elements of a video, and thus they represent a rich yet concise source of information to tackle the problem of media memorability prediction. In this paper, we deepen the study of short captions as a means to convey in natural language the visual semantics of a video. We propose to use vector embeddings from a pretrained SBERT topic detection model with no adaptation as input features to a linear regression model, showing that, from such a representation, simpler algorithms can outperform deep visual models. Our results suggest that text descriptions expressed in natural language might be effective in embodying the visual semantics required to model video memorability.
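The modeling recipe described here, frozen sentence embeddings fed to a linear regressor, is simple enough to sketch end to end. In the sketch below, random vectors with a planted linear signal stand in for the SBERT caption embeddings and memorability scores (assumptions, not the paper's data); the closed-form ridge fit is the kind of "simpler algorithm" the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: in the paper, X would be frozen SBERT embeddings of video
# captions and y the annotated memorability scores.
n, d = 200, 32
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = X @ true_w + 0.1 * rng.normal(size=n)

def ridge_fit(X, y, lam=1.0):
    # Closed-form ridge regression: w = (X^T X + lam I)^{-1} X^T y.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w = ridge_fit(X, y)
pred = X @ w
# Pearson correlation between predictions and targets (the memorability
# benchmarks typically report Spearman; Pearson keeps the sketch short).
r = np.corrcoef(pred, y)[0, 1]
print(round(r, 3))
```

The point of the design is that all representational work happens in the frozen encoder; the regressor on top has no capacity to overfit visual detail, which is why it can compete with deep visual models.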


Radiotekhnika, 2021, pp. 129-137
Author(s): V. Zhyrnov, S. Solonskaya

In this paper, a method is considered for transforming radar images of moving aerial objects with scintillating inter-period fluctuations, which sometimes result in complete signal fading, using the Talbot effect. These transformations reduce to establishing a correspondence between the asymptotic equality of perception of visual images that vary arbitrarily in time and space, and a statement about the conditions for simple equality of perception of radar-mark images that have different fluctuation frequencies. It is shown how this approach can be used to analyze radar data by transforming and smoothing scintillating signal fluctuations, invisible in the presence of interference, into visible symbolic images: first, to detect and recognize aerial objects from the analysis of relations and functional (semantic) dependencies between attributes, and second, to make decisions based on the semantic components of symbolic radar images. The possibility of using such a transformation to generate a pulse-frequency code of the fluctuations of symbolic radar angel-echo images, as an important characteristic for their recognition, has been experimentally verified. Algorithms for generating symbolic images in asynchronous and synchronous pulse-frequency code are formulated. The symbolic image represented by such a code is considered an additional feature for recognizing and filtering out natural interference such as angel echoes.
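The paper does not spell out its coding algorithms here, but the general notion of a pulse-frequency code of fluctuations can be illustrated with a toy sketch (my assumptions, not the authors' method): an asynchronous code records each upward threshold crossing of the fluctuation amplitude, and a synchronous code counts crossings per fixed scan window, so that fast-scintillating and slowly fading echoes produce visibly different symbolic codes.

```python
def async_pf_code(amplitudes, threshold):
    # Asynchronous pulse-frequency code: index positions where the fluctuation
    # crosses the threshold upward (one pulse per crossing).
    pulses = []
    for i in range(1, len(amplitudes)):
        if amplitudes[i - 1] < threshold <= amplitudes[i]:
            pulses.append(i)
    return pulses

def sync_pf_code(amplitudes, threshold, window):
    # Synchronous variant: number of upward crossings per fixed scan window.
    pulses = async_pf_code(amplitudes, threshold)
    n_windows = (len(amplitudes) + window - 1) // window
    counts = [0] * n_windows
    for p in pulses:
        counts[p // window] += 1
    return counts

# A fast-scintillating echo vs. a slowly fading one (toy amplitude sequences).
fast = [0, 2, 0, 2, 0, 2, 0, 2]
slow = [0, 1, 2, 2, 2, 1, 0, 0]
print(async_pf_code(fast, 1.5), sync_pf_code(fast, 1.5, 4))
print(async_pf_code(slow, 1.5), sync_pf_code(slow, 1.5, 4))
```

The code sequence itself, not the raw amplitudes, then serves as the recognition feature, which matches the paper's use of fluctuation frequency to separate angel echoes from real targets.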


2021, Vol 54 (5), pp. 1-37
Author(s): Liang Zhao

Events are occurrences at specific locations and times, with specific semantics, that nontrivially impact either our society or nature, such as earthquakes, civil unrest, system failures, pandemics, and crimes. It is highly desirable to be able to anticipate the occurrence of such events in advance to reduce the potential social upheaval and damage they cause. Event prediction, which has traditionally been prohibitively challenging, is now becoming a viable option in the big data era and is thus experiencing rapid growth, thanks also to advances in high-performance computing and new Artificial Intelligence techniques. A large body of existing work focuses on addressing the challenges involved, including heterogeneous multi-faceted outputs, complex (e.g., spatial, temporal, and semantic) dependencies, and streaming data feeds. Due to the strong interdisciplinary nature of event prediction problems, most existing event prediction methods were initially designed for specific application domains, though the techniques and evaluation procedures utilized are usually generalizable across domains. However, it is imperative yet difficult to cross-reference techniques across domains, given the absence of a comprehensive literature survey for event prediction. This article aims to provide a systematic and comprehensive survey of the technologies, applications, and evaluations of event prediction in the big data era. First, a systematic categorization and summary of existing techniques is presented, which facilitates domain experts' searches for suitable techniques and helps model developers consolidate their research at the frontiers. Then, a comprehensive categorization and summary of major application domains is provided to introduce wider applications to model developers and help them expand the impact of their research. Evaluation metrics and procedures are summarized and standardized to unify the understanding of model performance among stakeholders, model developers, and domain experts in various application domains. Finally, open problems and future directions are discussed. Additional resources related to event prediction are included on the paper website: http://cs.emory.edu/~lzhao41/projects/event_prediction_site.html.


Author(s): Mehwish Alam, Aldo Gangemi, Valentina Presutti, Diego Reforgiato Recupero

Abstract This paper introduces TakeFive, a new semantic role labeling method that transforms a text into a frame-oriented knowledge graph. It performs dependency parsing, identifies the words that evoke lexical frames, locates the roles and fillers for each frame, runs coercion techniques, and formalizes the results as a knowledge graph. This formal representation complies with the frame semantics used in Framester, a factual-linguistic linked data resource. We tested our method on the WSJ section of the Penn Treebank, annotated with VerbNet and PropBank labels, and on the Brown corpus. The evaluation was performed according to the CoNLL Shared Task on Joint Parsing of Syntactic and Semantic Dependencies. The obtained precision, recall, and F1 values indicate that TakeFive is competitive with existing methods such as SEMAFOR, Pikes, PathLSTM, and FRED. We finally discuss how to combine TakeFive and FRED, obtaining higher values of precision, recall, and F1 measure.
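The pipeline described, parse dependencies, find the frame-evoking word, map syntactic arguments to frame roles, and emit a knowledge graph, can be sketched in a few lines. Everything below is a hand-written toy (the dependency triples, the `ROLE_MAP`, and the frame name are illustrative assumptions, not TakeFive's actual rules or Framester's schema):

```python
# Toy dependency parse of "Mary sold the book to John" (hand-written triples,
# standing in for a real parser's output).
deps = [
    ("sold", "nsubj", "Mary"),
    ("sold", "obj", "book"),
    ("sold", "obl", "John"),
]

# Hypothetical mapping from syntactic relations to roles of a Commerce_sell frame.
ROLE_MAP = {"nsubj": "Seller", "obj": "Goods", "obl": "Buyer"}

def to_knowledge_graph(predicate, deps, frame):
    # Emit (subject, property, object) triples: one frame-typing triple for the
    # predicate plus one triple per recognised role filler.
    graph = [(predicate, "evokesFrame", frame)]
    for head, rel, dep in deps:
        if head == predicate and rel in ROLE_MAP:
            graph.append((predicate, ROLE_MAP[rel], dep))
    return graph

kg = to_knowledge_graph("sold", deps, "Commerce_sell")
for triple in kg:
    print(triple)
```

Real systems earn their keep where this toy breaks down: ambiguous frame evocation, role fillers that need coercion, and clauses whose syntax does not line up with any simple relation-to-role table.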


2021, Vol 30, pp. 396
Author(s): Patrick D. Elliott, Yasutada Sudo

Crossover (CO) is a constraint on anaphoric dependencies according to which quantifier scope can feed pronominal anaphora unless the anaphoric expression precedes the quantifier. We demonstrate that effects reminiscent of CO arise with presupposition as well, and propose to generalise CO as follows: projective content (quantifier scope, presupposition projection, etc.) feeds semantic dependencies (pronominal anaphora, presupposition satisfaction) unless the semantically dependent expression precedes the trigger of the projective content. We call this generalisation Generalised Crossover (GCO). Although we cannot offer a full explanation for GCO in this paper, we discuss its implications for recent theories of CO.


2021, Vol 9, pp. 226-242
Author(s): Zhaofeng Wu, Hao Peng, Noah A. Smith

Abstract For natural language processing systems, two kinds of evidence support the use of text representations from neural language models "pretrained" on large unannotated corpora: performance on application-inspired benchmarks (Peters et al., 2018, inter alia), and the emergence of syntactic abstractions in those representations (Tenney et al., 2019, inter alia). On the other hand, the lack of grounded supervision calls into question how well these representations can ever capture meaning (Bender and Koller, 2020). We apply novel probes to recent language models, specifically focusing on predicate-argument structure as operationalized by semantic dependencies (Ivanova et al., 2012), and find that, unlike syntax, semantics is not brought to the surface by today's pretrained models. We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning, yielding benefits to natural language understanding (NLU) tasks in the GLUE benchmark. This approach demonstrates the potential for general-purpose (rather than task-specific) linguistic supervision, above and beyond conventional pretraining and finetuning. Several diagnostics help to localize the benefits of our approach.
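The probing methodology at the heart of this paper keeps the representations frozen and asks how much a *linear* model can recover from them. The sketch below shows that skeleton with random stand-in features and a synthetic binary target (whether a semantic-dependency edge is present); it is an illustration of the probing recipe, not the paper's actual probe or data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen "contextual representations" (random stand-ins for LM hidden states)
# and binary labels for a toy semantic-dependency edge (present / absent).
n, d = 300, 16
H = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
labels = (H @ w_true > 0).astype(float)

def train_linear_probe(H, y, lr=0.5, steps=300):
    # Logistic regression trained on frozen features only; its accuracy
    # measures how linearly accessible the target structure is.
    w = np.zeros(H.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(H @ w)))
        w -= lr * H.T @ (p - y) / len(y)
    return w

w = train_linear_probe(H, labels)
acc = ((H @ w > 0).astype(float) == labels).mean()
print(round(acc, 3))
```

Here the labels are linearly recoverable by construction, so the probe succeeds; the paper's finding is that for real pretrained representations the analogous probe succeeds for syntax but falls short for semantic dependencies.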


Author(s): Osovska S.I., Siaileva Ye.V.

Purpose. The purpose of this article is to establish the features of the conceptual system of modern German-language medical discourse, namely to identify the key units of its conceptual space as a mental resource and to determine the relationships between them. The research is performed within the modern cognitive-discursive paradigm of linguistic research, aimed, among other things, at establishing the mental and speech features of the discursive practices of ethnocultural communities and the complementarity of their mental and verbal resources, which makes it possible to describe their mental conditionality.
Methods. The method of cognitive mapping was used to interpret the semantic plane of the text as a component of discourse. It consisted of the step-by-step application of a conceptual method to establish the set of autochthonous concepts; analysis of regular adjacent pairs, which identified the pairs of concepts used in the immediate context; quantitative analysis, which justified the regularity of the established pairs for this type of discourse; and logical-semantic analysis, on the basis of which the logical interpretations were formulated, i.e. the main presuppositions underlying modern German-language medical discourse.
Results. On the material of 500 discursive acts, 2,367 concepts were identified and united by a substantive principle into 14 domains, from which 65 autochthonous concepts were objectively established that form the framework of the conceptual system of modern German-language medical discourse, as well as regular inter-autochthonous correlations of implication, succession, causation, and coordination, demonstrating certain semantic dependencies in the minds of its participants. This made it possible to present the established concepts as the core of the German conception of the medical sphere and to compare its cognitive representation and linguistic objectification.
Conclusions. Through the explication of the basic structure of the conceptual system of the studied discursive practice, the key ethnospecific presuppositions of the main actants of modern German-language medical discourse are formulated. They testify that the cooperative partnership atmosphere between the modern German patient and doctor is formed as a symbiosis of awareness and observance of socially accepted laws, norms of behavior, values, and professional duties in the course of specific medical activity, as well as the expression of mutual emotional support.

