A Rule-based Automated Approach for Extracting Models from Source Code

Author(s):  
Makoto Ichii ◽  
Tomoyuki Myojin ◽  
Yuichiroh Nakagawa ◽  
Masaki Chikahisa ◽  
Hideto Ogawa
2021 ◽  
Author(s):  
Pasquale Minervini ◽  
Sebastian Riedel ◽  
Pontus Stenetorp ◽  
Edward Grefenstette ◽  
Tim Rocktäschel

Attempts to render deep learning models interpretable, data-efficient, and robust have seen some success through hybridisation with rule-based systems, for example, in Neural Theorem Provers (NTPs). These neuro-symbolic models can induce interpretable rules and learn representations from data via back-propagation, while providing logical explanations for their predictions. However, they are restricted by their computational complexity, as they need to consider all possible proof paths for explaining a goal, rendering them unfit for large-scale applications. We present Conditional Theorem Provers (CTPs), an extension to NTPs that learns an optimal rule selection strategy via gradient-based optimisation. We show that CTPs are scalable and yield state-of-the-art results on the CLUTRR dataset, which tests systematic generalisation of neural models by training them to reason over smaller graphs and evaluating them on larger ones. Finally, CTPs show better link prediction results on standard benchmarks in comparison with other neuro-symbolic models, while being explainable. All source code and datasets are available online at https://github.com/uclnlp/ctp.
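The proof-path explosion the abstract describes can be pictured with a toy symbolic backward-chaining prover (a minimal sketch in plain Python, not the NTP/CTP implementation; the facts, rules, and names are hypothetical): proving a goal means trying every fact and every rule whose head unifies with it, so the number of paths multiplies with proof depth.

```python
# Toy backward-chaining prover (illustrative only; not the NTP/CTP code).
# Atoms are tuples (predicate, arg1, arg2); uppercase strings are variables.
FACTS = {("parent", "ann", "bob"), ("parent", "bob", "cal")}
RULES = [
    # grandparent(X, Z) :- parent(X, Y), parent(Y, Z)
    (("grandparent", "X", "Z"), [("parent", "X", "Y"), ("parent", "Y", "Z")]),
]

def is_var(t):
    return t[0].isupper()

def unify(a, b, subst):
    """Unify two atoms under a substitution; return the extended
    substitution, or None if they cannot be made equal."""
    if a[0] != b[0] or len(a) != len(b):
        return None
    s = dict(subst)
    for x, y in zip(a[1:], b[1:]):
        x, y = s.get(x, x), s.get(y, y)
        if x == y:
            continue
        if is_var(x):
            s[x] = y
        elif is_var(y):
            s[y] = x
        else:
            return None
    return s

def prove(goals, subst, depth=0, max_depth=5):
    """Yield every substitution that proves all goals.  Note that ALL
    matching facts and rules are explored -- the path explosion that
    CTPs address by learning which rules to select."""
    if depth > max_depth:
        return
    if not goals:
        yield subst
        return
    goal, rest = goals[0], goals[1:]
    goal = tuple(subst.get(t, t) for t in goal)
    for fact in FACTS:                 # try every matching fact...
        s = unify(goal, fact, subst)
        if s is not None:
            yield from prove(rest, s, depth + 1, max_depth)
    for head, body in RULES:           # ...and every matching rule
        s = unify(goal, head, subst)
        if s is not None:
            yield from prove(body + rest, s, depth + 1, max_depth)

answers = list(prove([("grandparent", "ann", "Z")], {}))
```

Even in this two-fact example, every goal fans out over all facts and all rules; NTPs do the analogous search in embedding space, which is why learning a rule selection strategy matters for scalability.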


In many software systems, logging is implemented inaccurately, so its effectiveness during the maintenance period for identifying failures and addressing them quickly is very low. This in turn increases the software maintenance cost and reduces the reliability of the system, as many errors go unreported. This paper proposes and studies a rule-based approach to make the logs more effective. The source code of the target system is reverse engineered and serves as the primary input to this approach, which introduces automated logs into the source code. The instrumentation is performed by logger code driven by a set of predefined rules woven around the life cycle of the system entities. The validity of the approach is verified by means of a preliminary fault-injection experiment on a real-world system.
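One way to picture the rule-driven instrumentation described above is a source rewriter that matches life-cycle-related method names and prepends a logger call. This is a hypothetical sketch, not the paper's tooling or rule set; the rule list, class, and method names are ours.

```python
import ast

# Hypothetical life-cycle rules: method-name prefixes that should be logged.
LIFECYCLE_RULES = ("init", "start", "stop", "destroy")

class LogInjector(ast.NodeTransformer):
    """Prepend a logging statement to any method matched by a rule."""
    def visit_FunctionDef(self, node):
        if node.name.startswith(LIFECYCLE_RULES):
            stmt = ast.parse(f"logger.info('entering {node.name}')").body[0]
            node.body.insert(0, stmt)   # woven in at method entry
        return node

source = """
class Service:
    def start(self):
        self.running = True
"""
tree = LogInjector().visit(ast.parse(source))
instrumented = ast.unparse(ast.fix_missing_locations(tree))
```

After the rewrite, `instrumented` contains a `logger.info(...)` call as the first statement of `start`, so the life-cycle event is logged without the developer writing the call by hand.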


Author(s):  
Amir Hossein Arshia ◽  
Amir Hossein Rasekh ◽  
Mohammad Reza Moosavi ◽  
Seyed Mostafa Fakhrahmad ◽  
Mohammad Hadi Sadreddini

Abstract Correctness of the designed system is one of the most important issues in the software development process. Therefore, various tests have been defined and designed to help software teams develop software with little or no problem. Finding a proper link between a test class and the class under test is an important but difficult task. Finding this relation helps developers conduct regression tests more efficiently. In this paper, we seek to propose a model for recovering traceability links between test classes and the classes under test. The proposed method encompasses three parts: (1) a method for extracting keywords and measuring the similarity of a specific part of the code, (2) a backward-chaining method based on a rule-based system, and (3) a hybrid model combining the two to find traceability links between test classes and the code under test. This study uses three open-source projects and one industrial project to conduct experiments. The results are satisfactory compared to previous studies.


MATICS ◽  
2016 ◽  
Vol 8 (1) ◽  
pp. 40
Author(s):  
Ainatul Mardhiyah ◽  
Puji Mahanani ◽  
A’la Syauqi

Abstract – String matching is the sequence of processes that must be carried out to find an arrangement of characters that exactly matches a given pattern. The Brute-Force algorithm expresses this string-matching concept directly, in both theory and source code. Considering its advantages and disadvantages, applying the Brute-Force algorithm is expected to simplify checking the suitability of user answers against the answer key in an application for learning JavaScript among students. The application needed 0.003210462 seconds for the lesson at level 1, 0.003294419 seconds at level 2, and 0.003478261 seconds at level 3.
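The Brute-Force matching described above can be sketched as follows (a minimal illustration; the function and variable names are ours, not the paper's):

```python
def brute_force_match(text: str, pattern: str) -> int:
    """Return the index of the first exact occurrence of pattern in text,
    or -1 if there is none.  Brute force: check every alignment of the
    pattern against the text, character by character."""
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):       # every possible starting alignment
        j = 0
        while j < m and text[i + j] == pattern[j]:
            j += 1
        if j == m:                   # whole pattern matched at offset i
            return i
    return -1

# E.g. checking whether a user's answer contains the expected key:
assert brute_force_match("var x = 5;", "x = 5") == 4
```

Every alignment is tried in turn, giving O(n·m) comparisons in the worst case, which is the simplicity-versus-speed trade-off the abstract alludes to.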


1992 ◽  
Vol 23 (1) ◽  
pp. 52-60 ◽  
Author(s):  
Pamela G. Garn-Nunn ◽  
Vicki Martin

This study explored whether or not standard administration and scoring of conventional articulation tests accurately identified children as phonologically disordered and whether or not information from these tests established severity level and programming needs. Results of standard scoring procedures from the Assessment of Phonological Processes-Revised, the Goldman-Fristoe Test of Articulation, the Photo Articulation Test, and the Weiss Comprehensive Articulation Test were compared for 20 phonologically impaired children. All tests identified the children as phonologically delayed/disordered, but the conventional tests failed to clearly and consistently differentiate varying severity levels. Conventional test results also showed limitations in error sensitivity, ease of computation for scoring procedures, and implications for remediation programming. The use of some type of rule-based analysis for phonologically impaired children is highly recommended.


Author(s):  
Bettina von Helversen ◽  
Stefan M. Herzog ◽  
Jörg Rieskamp

Judging other people is a common and important task. Every day professionals make decisions that affect the lives of other people when they diagnose medical conditions, grant parole, or hire new employees. To prevent discrimination, professional standards require that decision makers render accurate and unbiased judgments solely based on relevant information. Facial similarity to previously encountered persons can be a potential source of bias. Psychological research suggests that people only rely on similarity-based judgment strategies if the provided information does not allow them to make accurate rule-based judgments. Our study shows, however, that facial similarity to previously encountered persons influences judgment even in situations in which relevant information is available for making accurate rule-based judgments and where similarity is irrelevant for the task and relying on similarity is detrimental. In two experiments in an employment context we show that applicants who looked similar to high-performing former employees were judged as more suitable than applicants who looked similar to low-performing former employees. This similarity effect was found despite the fact that the participants used the relevant résumé information about the applicants by following a rule-based judgment strategy. These findings suggest that similarity-based and rule-based processes simultaneously underlie human judgment.

