IDENTIFICATION OF HIDDEN VULNERABILITIES IN THE SOURCE CODE OF MULTI-THREADED PROGRAMS BY ANALYSIS OF FUNCTIONAL TRANSITIONS

Author(s):  
D. A. Morgunov ◽  

The article presents a new set-theoretic model and procedures that reduce the time required to detect hidden vulnerabilities in the source code of multi-threaded computer programs, together with the results of mathematical modeling. Hidden vulnerabilities are understood here as vulnerabilities leading to data races and deadlocks: they manifest stochastically during testing, which greatly complicates their identification. The presented model describes, for each thread of a multi-threaded computer program, the function currently being executed and the contents of the function call stack. At the same time, the model remains usable for verification by the Model Checking method, and it eliminates the need to solve the problem of searching for the model invariant. The presented procedures make it possible to formulate specifications for model-based verification whose implementation identifies vulnerabilities leading to data races and deadlocks in the source code of multi-threaded programs.
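The abstract does not reproduce the procedures themselves. As a rough illustration of the kind of analysis involved in finding deadlock-prone code statically, the sketch below (hypothetical, and far simpler than the article's set-theoretic model) flags a potential deadlock by searching for a cycle in a lock-acquisition-order graph:

```python
# Hypothetical sketch: flag a potential deadlock by finding a cycle in the
# lock-acquisition-order graph (not the article's model).

def find_lock_cycle(acquisitions):
    """acquisitions: per-thread lists of lock names in acquisition order."""
    # Edge A -> B means some thread acquires lock B while holding lock A.
    edges = {}
    for order in acquisitions:
        for held, nxt in zip(order, order[1:]):
            edges.setdefault(held, set()).add(nxt)

    # Depth-first search for a cycle in the directed graph.
    def dfs(node, stack, visited):
        visited.add(node)
        stack.add(node)
        for succ in edges.get(node, ()):
            if succ in stack:
                return True
            if succ not in visited and dfs(succ, stack, visited):
                return True
        stack.discard(node)
        return False

    visited = set()
    return any(dfs(n, set(), visited) for n in list(edges) if n not in visited)

# Two threads taking the same locks in opposite order: classic deadlock risk.
print(find_lock_cycle([["a", "b"], ["b", "a"]]))  # True
print(find_lock_cycle([["a", "b"], ["a", "b"]]))  # False
```

A real analysis must also model the call stack, as the article's model does, since lock acquisitions are usually spread across function calls rather than visible in one scope.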


Author(s):  
S.S. Dzhimak ◽  
M.I. Drobotenko ◽  
A.A. Basov ◽  
A.A. Svidlov ◽  
M.G. Baryshev

The article presents an evaluation of the possible effect of deuterium atoms on DNA base pair opening. The cause of these processes is the replacement of protium with a deuterium atom, which increases the energy required to break the hydrogen bond. These processes can be studied by mathematical modeling, with the account of open states between base pairs being a key condition for the adequacy of a mathematical model of DNA. The data show that the presence of deuterium in a chain of nucleotides can, depending on the value of the hydrogen bond disruption energy, either increase or decrease the probability of open states occurring. For example, at a hydrogen bond disruption energy of 0.358·10⁻²² N·m, a non-zero probability of open states is observed in the absence of deuterium in the molecule, while at 0.359·10⁻²² N·m or more this probability equals zero. When one deuterium atom is present in the molecule, however, a non-zero probability is observed even at a hydrogen bond disruption energy of 0.368·10⁻²² N·m (i.e., more than 0.358·10⁻²² N·m). Thus, the participation of deuterium atoms in the formation of hydrogen bonds in the DNA double helix can change the time required for the transfer of genetic information, which may explain why even minor deviations in the deuterium concentration of a medium affect metabolic processes in a living system.



2018 ◽  
Author(s):  
Mohd Suhail Rizvi

Abstract: The transportation of cargoes in biological cells is primarily driven by motor proteins on filamentous protein tracks. The stochastic nature of the motion of a motor protein often leads to its spontaneous detachment from the track. Using the available experimental data, we demonstrate a tradeoff between the speed of the motor and its rate of spontaneous detachment from the track. Further, it is shown that this speed-detachment relation follows a power law whose exponent dictates the nature of the motor protein's processivity. We use this information to study the motion of a motor protein on a track with a random-walk model, obtaining the average distance travelled in a fixed duration and the average time required to cover a given distance. These analyses reveal a non-monotonic dependence of transport on the motor protein's speed; optimal motor speeds can therefore be identified for time-controlled and distance-controlled conditions.
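The speed-detachment tradeoff can be sketched with a minimal simulation. All parameters below (`k0`, `alpha`) are invented for illustration and are not the paper's fitted values; the point is only that when the detachment rate grows faster than linearly with speed, faster motors cover less distance per run:

```python
# Hypothetical sketch of the speed-detachment tradeoff (parameters invented):
# detachment rate grows as a power law of speed, k(v) = k0 * v**alpha, so a
# motor walking at speed v detaches after an exponentially distributed time.
import random

def mean_run_length(speed, k0=0.1, alpha=1.5, trials=20000, seed=1):
    """Average distance covered before spontaneous detachment."""
    rng = random.Random(seed)
    rate = k0 * speed ** alpha          # power-law speed-detachment relation
    # Time to detach ~ Exponential(rate); distance = speed * time.
    return sum(speed * rng.expovariate(rate) for _ in range(trials)) / trials

# With alpha > 1 the mean run length falls as speed rises, since analytically
# E[distance] = v / k(v) = v**(1 - alpha) / k0.
print(mean_run_length(1.0) > mean_run_length(4.0))  # True
```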





2019 ◽  
Vol 2019 ◽  
pp. 1-19
Author(s):  
Z. Yu ◽  
Y. Zuo ◽  
W. C. Xiong

Software transactional memory is an effective mechanism to avoid concurrency bugs in multithreaded programs. However, two problems hinder the adoption of such traditional systems in the wild: the high human cost of equipping programs with transaction functionality and low compatibility with I/O calls and condition variables. This paper presents Convoider to solve these problems. By intercepting interthread operations and designating the code among them as transactions in each thread, Convoider automatically transactionalizes target programs without any source code modification or recompiling. By saving/restoring stack frames and CPU registers on beginning/aborting a transaction, Convoider makes the execution flow revocable. By turning threads into processes, leveraging virtual memory protection, and customizing memory allocation/deallocation, Convoider makes memory manipulations revocable. By maintaining virtual file systems and redirecting I/O operations onto them, Convoider makes I/O effects revocable. By converting lock/unlock operations to no-ops, customizing signal/wait operations on condition variables, and committing memory changes transactionally, Convoider makes deadlocks, data races, and atomicity violations impossible. Experimental results show that Convoider succeeds in transparently transactionalizing twelve real-world applications, incurring only 28% runtime overhead on average and avoiding 94% of the thirty-one concurrency bugs used in our experiments. This study can help efficiently transactionalize legacy multithreaded applications and effectively improve their runtime reliability.
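The core idea of making memory manipulations revocable can be sketched in a few lines. This is a hypothetical snapshot-and-rollback illustration, not Convoider's mechanism (which works at the level of processes, page protection, and CPU registers):

```python
# Hypothetical sketch of revocable memory updates (not Convoider itself):
# snapshot shared state when a transaction begins, restore it on abort,
# and discard the snapshot on commit.
import copy

class Transaction:
    def __init__(self, state):
        self.state = state
        self.snapshot = None

    def begin(self):
        self.snapshot = copy.deepcopy(self.state)  # save a rollback point

    def abort(self):
        self.state.clear()                         # revoke all writes
        self.state.update(self.snapshot)

    def commit(self):
        self.snapshot = None                       # writes become permanent

shared = {"balance": 100}
tx = Transaction(shared)
tx.begin()
shared["balance"] -= 150       # tentative write drives the balance negative
if shared["balance"] < 0:
    tx.abort()                 # invariant violated: roll the write back
print(shared["balance"])  # 100
```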



2011 ◽  
Vol 63 (1) ◽  
pp. 1-9 ◽  
Author(s):  
A. Khalifa ◽  
S. Bayoumi ◽  
O. El Monayeri

Mathematical modeling has been a vital tool in the field of environmental engineering. Various models have been developed to simulate the aeration efficiency (AE) provided by different aerating structures to raise levels of dissolved oxygen (DO) in streams; one such structure is the stepped cascade. Three models, developed by Gameson et al., WRL, and Nakasone, in addition to Qual2k, a computer program for stream modeling, have been used in this research; the AE values obtained have been compared to those computed using DO measured from a model built at a WWTP. A stepped cascade structure was installed with different heights to aerate five flowrates with different levels of COD. An adjustment was made to the Nakasone model to test the effect of pollutant load on the amount of aeration that could be reached. AE values computed using the Gameson model were 30%, 39.5%, and 40% for cascade heights (Hd) of 45, 60, and 75 cm respectively, for the five flowrates (q) that ranged from 21 to 66 m³/h. AE values from the WRL model were 32.8%, 42%, and 43% respectively. AE values from the Nakasone model ranged from 4.6–7.5%, 6–10%, and 7.6–12% respectively. For the adjusted Nakasone model, AE values ranged from 3.2–4.9%, 3.3–5.3%, and 4.1–6.7% respectively. Finally, the AE computed using the downstream DO values generated by Qual2k ranged from 4–18%, 2–15%, and 2.5–5.1% respectively. Around 80% of the downstream DO values computed using the Nakasone and adjusted Nakasone models were closer to those measured in the field, making these models more reliable for cascade design.
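The AE values compared above all derive from the standard definition of aeration efficiency: the fraction of the upstream DO deficit removed as water passes over the structure. A minimal sketch (the specific Gameson, WRL, and Nakasone correlations for predicting downstream DO are not reproduced here, and the numbers below are hypothetical):

```python
# Standard definition of aeration efficiency for a cascade: the fraction of
# the upstream DO saturation deficit removed by the structure.

def aeration_efficiency(do_up, do_down, do_sat):
    """AE = (Cd - Cu) / (Cs - Cu), with Cu/Cd the upstream/downstream DO
    and Cs the saturation DO, all in the same units (e.g. mg/L)."""
    return (do_down - do_up) / (do_sat - do_up)

# Example values (hypothetical, mg/L): upstream 2.0, downstream 4.4, saturation 8.0.
print(round(aeration_efficiency(2.0, 4.4, 8.0), 2))  # 0.4, i.e. 40% AE
```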



2017 ◽  
Vol 23 (2) ◽  
pp. 537-559
Author(s):  
Péter Gyimesi

Identifying fault-prone code parts is useful for developers because it helps reduce the time required to locate bugs. It is usually done by characterizing the already known bugs with certain kinds of metrics and building a predictive model from the data. For the characterization of bugs, software product and process metrics are the most popular. The calculation of product metrics is supported by many free and commercial software products; however, tools capable of computing process metrics are quite rare. In this study, we present a method of computing software process metrics in a graph database. We describe the schema of the database created and present a way to readily obtain the process metrics from it. With this technique, process metrics can be calculated at the file, class, and method levels. We used GitHub as the source of the change history and selected 5 open-source Java projects for processing. To retrieve positional information about the classes and methods, we used SourceMeter, a static source code analyzer tool. We used Neo4j as the graph database engine, and its query language, Cypher, to obtain the process metrics. We published the tools we created as open-source projects on GitHub. To demonstrate the utility of our tools, we selected 25 release versions of the 5 Java projects and calculated the process metrics for all of the source code elements (files, classes, and methods) in these versions. Using our previously published bug database, we built bug databases for the selected projects that contain the computed process metrics and the corresponding bug numbers for files and classes. (We published these databases as an online appendix.) We then applied 13 machine learning algorithms to the database we created to find out whether it is feasible for bug prediction purposes. We achieved F-measure values of around 0.7 on average at the class level, and slightly better values of between 0.7 and 0.75 at the file level. The best performing algorithm was the RandomForest method in both cases.
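One typical process metric is the number of modifications a file has received over its history. The study's schema and Cypher queries are not reproduced in the abstract, so the sketch below computes the same kind of metric in plain Python from a hypothetical commit log; the commented Cypher query is likewise an invented illustration, not the paper's:

```python
# A sketch of one typical process metric, the number of modifications per
# file, computed from a commit log. The paper stores the change history in
# Neo4j and derives such metrics with Cypher queries instead; a comparable
# (hypothetical) query might be:
#   MATCH (c:Commit)-[:MODIFIES]->(f:File {name: "A.java"}) RETURN count(c)
from collections import Counter

def modification_counts(commits):
    """commits: iterable of (author, [changed file names]) pairs."""
    counts = Counter()
    for _author, files in commits:
        counts.update(files)
    return counts

log = [
    ("alice", ["A.java", "B.java"]),
    ("bob",   ["A.java"]),
    ("alice", ["A.java", "C.java"]),
]
print(modification_counts(log)["A.java"])  # 3
```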



Author(s):  
Д.В. Леонтьев ◽  
Д.С. Одякова ◽  
В. Парахин ◽  
Д.И. Харитонов

An approach to modeling exception handling in imperative programs is proposed. The problems of using exceptions in programs and a general approach to the automatic construction of program models are considered, and the minimal set of semantic-construction templates needed to build models of imperative programs is described. As an example, the modeling of a small program is described and its resulting model is given in compositional form. The purpose of the article is to propose an approach to the automatic generation of models of imperative programs with exceptions from the source code. Methodology. The approach defines consecutive transformations of the program, beginning from the source code to the parse tree of the program, then to an abstract semantic graph, and finally to a compositional model in terms of Petri nets. The transformations are based on a set of formal principles and relations and can be performed purely algorithmically, without human intervention. To build a model from the program's abstract semantic graph, templates and composition rules are used. The templates describe, in terms of Petri nets, the basic constructions of imperative programming languages: expressions, branching, loops, choice, and function calls. Findings. A set of templates for modeling the exception handling mechanism is described. This set includes templates for the try and catch blocks describing the processing of an exception in local parts of the program, the throw operator to signal an exception, and the operator of a function call with exceptions. Originality/value. The article demonstrates that the proposed set of templates allows building a complete model of a program with exceptions, consisting of several functions. The resulting program model makes it possible to analyze the program's behavior by formal methods standard for Petri nets. In particular, it can be verified whether an abnormal termination due to an exceptional situation is possible, where each particular exception is handled, and which exceptions are handled in a particular catch block.
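The Petri net execution rule underlying such models can be sketched very compactly. This is a hypothetical minimal illustration, far simpler than the paper's templates: places hold tokens, and a transition fires when every input place has a token, moving tokens from inputs to outputs; a throw-to-catch transfer of control can be modeled as one such transition:

```python
# Minimal Petri net sketch (hypothetical, not the paper's template set):
# a transition fires when all of its input places hold a token.

def fire(marking, transition):
    """transition = (input_places, output_places); returns the new marking,
    or None if the transition is not enabled under the given marking."""
    inputs, outputs = transition
    if any(marking.get(p, 0) < 1 for p in inputs):
        return None                       # not enabled
    new = dict(marking)
    for p in inputs:
        new[p] -= 1
    for p in outputs:
        new[p] = new.get(p, 0) + 1
    return new

# "throw" moves the control token from inside the try block to the catch handler.
marking = {"in_try": 1, "in_catch": 0}
throw = (["in_try"], ["in_catch"])
print(fire(marking, throw))  # {'in_try': 0, 'in_catch': 1}
```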



1968 ◽  
Vol 12 ◽  
pp. 391-403 ◽  
Author(s):  
Hung-Chi Chao

Abstract: The texture of sheet metal is best described by means of pole figures, which are very expensive and time-consuming to prepare. About 8 to 12 hours of effort by a specially trained and highly skilled technician are needed to prepare each pole figure. Accordingly, pole figures are not used as extensively in research studies as they would be if they could be obtained more easily. A method has been developed for automatically producing pole figures by printing results directly from a digital computer. This method does not require the use of additional plotting attachments and is therefore less expensive and time-consuming than other methods. With this method, any laboratory with access to a digital computer can produce pole figures automatically. X-ray diffraction intensities are recorded on punched tape or on punched cards and are fed into the digital computer. A computer program corrects X-ray data obtained by either transmission or reflection X-ray techniques, maps the stereographic projection, and prints pole figures directly. The time required to prepare an accurate pole figure is reduced from 8 to 12 hours to 20 minutes or less, depending on the type of digital computer used.
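The stereographic mapping step mentioned above follows a standard formula: a pole direction, taken as a unit vector in the upper hemisphere, is projected onto the equatorial plane from the opposite pole of the sphere. A sketch of that formula (the paper's full program, including the intensity corrections, is not reproduced):

```python
# Standard stereographic projection of a pole direction onto the equatorial
# plane (a sketch of one step of the method, not the paper's program).

def stereographic(x, y, z):
    """Project the unit vector (x, y, z), z >= 0, to plane coordinates (X, Y)
    by projecting from the south pole: (X, Y) = (x, y) / (1 + z)."""
    return x / (1.0 + z), y / (1.0 + z)

# The sphere's north pole (0, 0, 1) maps to the center of the pole figure.
print(stereographic(0.0, 0.0, 1.0))  # (0.0, 0.0)
# A direction in the equatorial plane maps to the unit-circle boundary.
print(stereographic(1.0, 0.0, 0.0))  # (1.0, 0.0)
```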


