Neutron: an attention-based neural decompiler

Cybersecurity ◽  
2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Ruigang Liang ◽  
Ying Cao ◽  
Peiwei Hu ◽  
Kai Chen

Decompilation aims to analyze and transform low-level programming language (PL) code, such as binary or assembly code, into an equivalent high-level PL. Decompilation plays a vital role in cyberspace security fields such as software vulnerability discovery and analysis and malicious code detection and analysis, as well as in software engineering fields such as source code analysis, optimization, and cross-language, cross-operating-system migration. Unfortunately, existing decompilers mainly rely on experts to write rules, which leads to bottlenecks such as low scalability, development difficulties, and long development cycles. The generated high-level PL code often violates code-writing conventions, and its readability remains relatively low. These problems hinder the efficiency of advanced applications (e.g., vulnerability discovery) built on decompiled high-level PL code. In this paper, we propose a decompilation approach based on the attention-based neural machine translation (NMT) mechanism, which converts low-level PL into high-level PL while gaining legibility and remaining functionally similar. To compensate for the information asymmetry between the low-level and high-level PL, a translation method based on the basic operations of the low-level PL is designed. This method improves the generalization of the NMT model and captures the translation rules between PLs more accurately and efficiently. In addition, we implement a neural decompilation framework called Neutron. The evaluation on two practical applications shows that Neutron’s average program accuracy is 96.96%, which is better than that of the traditional NMT model.
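As a rough illustration of the attention-based NMT idea described above (not Neutron's actual architecture or code), the following Python sketch shows a minimal encoder-decoder with dot-product attention that maps low-level PL tokens to high-level PL tokens; the module structure, vocabulary sizes, and dimensions are all assumptions.

```python
# Minimal sketch of an attention-based encoder-decoder for neural decompilation.
# Everything here (names, sizes, architecture details) is illustrative, not Neutron's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnDecompiler(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, d_model=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, d_model)   # low-level PL tokens (e.g. assembly)
        self.tgt_emb = nn.Embedding(tgt_vocab, d_model)   # high-level PL tokens (e.g. C)
        self.encoder = nn.GRU(d_model, d_model, batch_first=True)
        self.decoder = nn.GRU(d_model, d_model, batch_first=True)
        self.out = nn.Linear(2 * d_model, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        enc_out, h = self.encoder(self.src_emb(src_ids))       # (B, S, D)
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), h)    # (B, T, D)
        # Dot-product attention: each decoder step attends over all encoder states.
        scores = torch.bmm(dec_out, enc_out.transpose(1, 2))   # (B, T, S)
        ctx = torch.bmm(F.softmax(scores, dim=-1), enc_out)    # (B, T, D)
        return self.out(torch.cat([dec_out, ctx], dim=-1))     # per-step token logits

# Usage sketch: translate a batch of assembly-token ids into high-level-PL token logits.
model = AttnDecompiler(src_vocab=5000, tgt_vocab=4000)
logits = model(torch.randint(0, 5000, (2, 60)), torch.randint(0, 4000, (2, 40)))
print(logits.shape)  # torch.Size([2, 40, 4000])
```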

Author(s):  
Steven J. DeRose

XML can be as easy to work with as JSON. However, this has not been obvious until now. JSON is easy because it supports only datatypes that are already native to Javascript and uses the same syntax to access them (such as [1:10], ["x"], and "." notation). XML, on the other hand, supports additional datatypes, and is most commonly handled via SAX or DOM, both of which are low-level and meant to be cross-language. Typical developers want high-level access that feels "native" in the language they are using. These shortcomings have little or nothing to do with XML, and can be remedied by a different API. Software that demonstrates this is presented and described. It uses Python's richer set of abstract datatypes (such as tuples and sets), and provides native Python-style syntax with richer semantics than JSON or Javascript.
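The sketch below illustrates the general idea of giving XML a "native"-feeling Python interface using only the standard library; the tiny wrapper class and its behavior are illustrative assumptions, not the author's actual software.

```python
# Minimal illustration of "native"-feeling XML access in Python, standard library only.
# The Node wrapper is a hypothetical example, not the author's API.
import xml.etree.ElementTree as ET

class Node:
    def __init__(self, elem):
        self._e = elem

    def __getitem__(self, key):
        if isinstance(key, (int, slice)):                      # child access: doc[0], doc[0:2]
            got = list(self._e)[key]
            return [Node(e) for e in got] if isinstance(key, slice) else Node(got)
        return self._e.get(key)                                # attribute access: doc["id"]

    def __getattr__(self, name):                               # "." notation: doc.title
        child = self._e.find(name)
        if child is None:
            raise AttributeError(name)
        return Node(child)

    @property
    def text(self):
        return self._e.text

doc = Node(ET.fromstring('<book id="b1"><title>XML for Pythonistas</title></book>'))
print(doc["id"], doc.title.text, doc[0].text)   # b1 XML for Pythonistas XML for Pythonistas
```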


Author(s):  
Zhihao Fan ◽  
Zhongyu Wei ◽  
Siyuan Wang ◽  
Ruize Wang ◽  
Zejun Li ◽  
...  

Existing research on image captioning usually represents an image using a scene graph with low-level facts (objects and relations) and fails to capture the high-level semantics. In this paper, we propose a Theme Concepts extended Image Captioning (TCIC) framework that incorporates theme concepts to represent high-level cross-modality semantics. In practice, we model theme concepts as memory vectors and propose the Transformer with Theme Nodes (TTN) to incorporate those vectors for image captioning. Considering that theme concepts can be learned from both images and captions, we propose two settings for their representation learning based on TTN. On the vision side, TTN is configured to take both scene-graph-based features and theme concepts as input for visual representation learning. On the language side, TTN is configured to take both captions and theme concepts as input for text representation reconstruction. Both settings aim to generate target captions with the same transformer-based decoder. During training, we further align the representations of theme concepts learned from images and the corresponding captions to enforce cross-modality learning. Experimental results on MS COCO show the effectiveness of our approach compared to some state-of-the-art models.
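A hedged sketch of the core mechanism the abstract describes: learnable theme-concept memory vectors are joined to the input sequence so a Transformer encoder can attend over both. The dimensions, names, and number of theme nodes below are assumptions, not the paper's implementation.

```python
# Sketch of "theme nodes" attended alongside ordinary inputs in a Transformer encoder.
# All sizes and names are illustrative placeholders.
import torch
import torch.nn as nn

class ThemeNodeEncoder(nn.Module):
    def __init__(self, d_model=512, n_themes=20, n_heads=8, n_layers=3):
        super().__init__()
        self.theme_memory = nn.Parameter(torch.randn(n_themes, d_model))  # theme-concept memory vectors
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, features):                        # features: (B, N, d_model)
        B = features.size(0)
        themes = self.theme_memory.unsqueeze(0).expand(B, -1, -1)
        x = torch.cat([themes, features], dim=1)        # theme nodes join the token sequence
        out = self.encoder(x)
        k = self.theme_memory.size(0)
        return out[:, :k], out[:, k:]                   # updated theme states, contextualized features

enc = ThemeNodeEncoder()
theme_states, ctx = enc(torch.randn(2, 36, 512))        # e.g. 36 region / scene-graph features per image
print(theme_states.shape, ctx.shape)                    # torch.Size([2, 20, 512]) torch.Size([2, 36, 512])
```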


Author(s):  
Zhiwei Shi ◽  
Zhongzhi Shi ◽  
Hong Hu

Bridging the gap between low-level visual features and high-level semantic concepts has traditionally been a tough task for researchers. In this article, we propose a novel plausible model, namely cellular Bayesian networks (CBNs), to model the process of visual perception. The new model takes advantage of both the low-level visual features of target objects, such as colors, textures, and shapes, and the interrelationships between known objects, and integrates them into a Bayesian framework, which possesses both a firm theoretical foundation and wide practical applications. The novel model successfully overcomes some weaknesses of the traditional Bayesian network (BN) that prevent BN from being applied to large-scale cognitive problems. The experimental simulation also demonstrates that the CBNs model outperforms a purely bottom-up strategy by 6% or more in the task of shape recognition. Finally, although the CBNs model is designed for visual perception, it has great potential to be applied to other areas as well.
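The following minimal numerical sketch illustrates the kind of combination the abstract describes: bottom-up evidence from low-level features is fused with top-down context from already recognized neighboring objects via Bayes' rule. All labels and probabilities are fabricated for illustration and are not taken from the CBNs model.

```python
# Toy Bayesian fusion of bottom-up feature evidence and top-down object context.
import numpy as np

labels = ["cup", "ball", "plate"]

# Bottom-up: P(features | label), e.g. from color/texture/shape matching.
feature_likelihood = np.array([0.30, 0.45, 0.25])

# Top-down: P(label | context), e.g. a "table" and a "spoon" were already recognized nearby.
context_prior = np.array([0.50, 0.10, 0.40])

posterior = feature_likelihood * context_prior
posterior /= posterior.sum()

for lab, p in zip(labels, posterior):
    print(f"P({lab} | features, context) = {p:.2f}")
# Context pushes the decision toward "cup" even though features alone slightly favored "ball".
```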


2019 ◽  
Vol 1 (1) ◽  
pp. 31-39
Author(s):  
Ilham Safitra Damanik ◽  
Sundari Retno Andani ◽  
Dedi Sehendro

Milk is an important part of the diet for meeting nutritional needs, consumed by both children and adults. Indonesia has many producers of fresh milk, but production is not sufficient to meet national demand. Data mining is a branch of computer science that is widely used in research; one of its techniques is clustering, a method for grouping data, which becomes more effective as more data are used. The data used are provincial data for Indonesia from 2000 to 2017, obtained from the Central Statistics Agency. The result of this study is a clustering into two milk-producing groups, namely high-producing and low-producing regions. From the 27 records of fresh milk production in Indonesia, two provinces fall into the high-level cluster, namely West Java and East Java, while the other 25, together with 7 provinces not covered by the K-Means clustering calculation, fall into the low-level cluster.
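A minimal sketch of the clustering step described above, assuming scikit-learn's KMeans with k=2; the province names and production figures are placeholders, not the Central Statistics Agency data used in the study.

```python
# K-Means with k=2 to split provinces into high and low fresh-milk producers.
# Production figures are made-up placeholders.
import numpy as np
from sklearn.cluster import KMeans

provinces = ["West Java", "East Java", "Central Java", "North Sumatra", "Bali"]
production = np.array([[255.0], [416.0], [98.0], [12.0], [35.0]])  # e.g. thousand tonnes/year

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(production)
for name, label in zip(provinces, km.labels_):
    print(f"{name}: cluster {label}")
# With these placeholder figures, West Java and East Java land in one cluster
# (high producers) and the remaining provinces in the other (low producers).
```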


Author(s):  
Margarita Khomyakova

The author analyzes definitions of the concept of determinants of crime given by various scholars and offers her own definition. In this study, determinants of crime are understood as a set of its causes, the circumstances that contribute to its commission, and the dynamics of crime. It is noted that the Russian legislator, in Article 244 of the Criminal Code, defines the object of this criminal assault as public morality. Despite the use of evaluative concepts both in the disposition of this norm and in determining the specific object of the given crime, the position of criminologists is unequivocal: crimes of this kind are immoral and stand in irreconcilable conflict with generally accepted moral and legal norms. The paper also considers some views on making value judgments that could hardly be applied to legal norms. According to the author, the reasons for abuse of the bodies of the dead include the economic problems of the offender and a low level of culture and legal awareness; this list is not exhaustive. The main circumstances that contribute to abuse of the bodies of the dead and their burial places are the following: low income and unemployment, a low level of criminological prevention, and poor maintenance and protection of medical institutions and cemeteries due to the underperformance of state and municipal bodies. This list of circumstances is also open-ended. Due to several factors, including a high level of latency, it is not possible to reflect the dynamics of such crimes objectively. At the same time, identifying the determinants of abuse of the bodies of the dead will help reduce the number of such crimes.


2021 ◽  
pp. 002224372199837
Author(s):  
Walter Herzog ◽  
Johannes D. Hattula ◽  
Darren W. Dahl

This research explores how marketing managers can avoid the so-called false consensus effect—the egocentric tendency to project personal preferences onto consumers. Two pilot studies were conducted to provide evidence for the managerial importance of this research question and to explore how marketing managers attempt to avoid false consensus effects in practice. The results suggest that the debiasing tactic most frequently used by marketers is to suppress their personal preferences when predicting consumer preferences. Four subsequent studies show that, ironically, this debiasing tactic can backfire and increase managers’ susceptibility to the false consensus effect. Specifically, the results suggest that these backfire effects are most likely to occur for managers with a low level of preference certainty. In contrast, the results imply that preference suppression does not backfire but instead decreases false consensus effects for managers with a high level of preference certainty. Finally, the studies explore the mechanism behind these results and show how managers can ultimately avoid false consensus effects—regardless of their level of preference certainty and without risking backfire effects.


Author(s):  
Richard Stone ◽  
Minglu Wang ◽  
Thomas Schnieders ◽  
Esraa Abdelall

Human-robot interaction systems are increasingly being integrated into industrial, commercial, and emergency service agencies. It is critical that human operators understand and trust automation when these systems support and even make important decisions. The following study focused on a human-in-the-loop telerobotic system performing a reconnaissance operation. Twenty-four subjects were divided into groups based on the level of automation (Low-Level Automation (LLA) and High-Level Automation (HLA)). Results indicated a significant difference in hit rate between the low and high levels of control when a permanent error occurred. In the LLA group, the type of error had a significant effect on the hit rate. In general, the high level of automation performed better than the low level of automation, especially when it was more reliable, suggesting that subjects in the HLA group could rely on the automation to perform the task more effectively and more accurately.
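As an illustration of the kind of hit-rate comparison reported above, the sketch below runs an independent-samples t-test between two groups; the group sizes and hit rates are fabricated placeholders, not the study's data or its actual analysis.

```python
# Toy comparison of hit rates for a low-automation vs. high-automation group.
import numpy as np
from scipy import stats

lla_hits = np.array([0.62, 0.58, 0.71, 0.55, 0.60, 0.66, 0.59, 0.63, 0.57, 0.68, 0.61, 0.64])
hla_hits = np.array([0.81, 0.77, 0.85, 0.79, 0.83, 0.76, 0.88, 0.80, 0.82, 0.78, 0.84, 0.86])

t, p = stats.ttest_ind(hla_hits, lla_hits)
print(f"mean LLA = {lla_hits.mean():.2f}, mean HLA = {hla_hits.mean():.2f}")
print(f"t = {t:.2f}, p = {p:.4f}")   # p < .05 would indicate a significant group difference
```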


2020 ◽  
Vol 4 (POPL) ◽  
pp. 1-32 ◽  
Author(s):  
Michael Sammler ◽  
Deepak Garg ◽  
Derek Dreyer ◽  
Tadeusz Litak

2021 ◽  
pp. 0308518X2199781
Author(s):  
Xinyue Luo ◽  
Mingxing Chen

The nodes and links in urban networks are usually presented in a two-dimensional (2D) view. The co-occurrence of nodes and links can also be rendered from a three-dimensional (3D) perspective to reveal the characteristics of urban networks more intuitively. Our results show that the external connections of high-level cities are mainly affected by the level of the cities (nodes) and less affected by geographical distance, while medium-level cities are affected by the interaction of city level (nodes) and geographical distance. The external connections of low-level cities are greatly restricted by geographical distance.
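A minimal sketch of the 3D view described above, assuming matplotlib: city nodes are plotted with their level as the vertical axis and links are drawn between them. All cities, coordinates, and levels are illustrative placeholders, not the paper's data.

```python
# Toy 3D rendering of an urban network: x/y position plus city level as the z-axis.
import matplotlib.pyplot as plt

cities = {            # name: (x, y, level)
    "A (high)":   (0.0, 0.0, 3),
    "B (medium)": (2.0, 1.0, 2),
    "C (medium)": (1.0, 3.0, 2),
    "D (low)":    (3.0, 3.0, 1),
}
links = [("A (high)", "B (medium)"), ("A (high)", "C (medium)"), ("B (medium)", "D (low)")]

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
for name, (x, y, z) in cities.items():
    ax.scatter(x, y, z)
    ax.text(x, y, z, name)
for a, b in links:
    xa, ya, za = cities[a]
    xb, yb, zb = cities[b]
    ax.plot([xa, xb], [ya, yb], [za, zb])
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("city level")
plt.show()
```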

