function call
Recently Published Documents


TOTAL DOCUMENTS: 130 (FIVE YEARS: 42)
H-INDEX: 9 (FIVE YEARS: 1)

2022 · Vol E105.D (1) · pp. 19-20
Author(s): Bodin CHINTHANET, Raula GAIKOVINA KULA, Rodrigo ELIZA ZAPATA, Takashi ISHIO, Kenichi MATSUMOTO, ...

2021
Author(s): Chia-Yi Wu, Tao Ban, Shin-Ming Cheng, Bo Sun, Takeshi Takahashi

2021
Author(s): Gáspár Lukács, Andreas Gartus

Conducting research via the internet is a formidable and increasingly popular option for behavioral scientists. However, it is widely acknowledged that web browsers are not optimized for research: in particular, the timing of display changes (e.g., a stimulus appearing on the screen) still leaves room for improvement. So far, the typically recommended best (or least bad) timing method has been a single requestAnimationFrame (RAF) JavaScript function call, within which one gives the display command and obtains the time of that display change. In our Study 1, we assessed two alternatives: calling RAF twice consecutively, or calling RAF during a continually ongoing, independent loop of recursive RAF calls. While the former showed little or no improvement compared to single RAF calls, with the latter we significantly and substantially improved overall precision, achieving practically faultless precision in most practical cases. In Study 2, we reassessed this "RAF loop" timing method with images in combination with three different display methods: precision remained high when using either visibility or opacity changes, while drawing on a canvas element consistently led to comparatively lower precision. We recommend the "RAF loop" display timing method for improved precision in future studies, and visibility or opacity changes when using image stimuli. We have also shared, in public repositories, the easy-to-use code for this method, exactly as employed in our studies.
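
As a rough illustration of the "RAF loop" idea summarized above, here is a minimal sketch in TypeScript for the browser. It is not the authors' published code: the helper names startRafLoop and showStimulus, and the choice of a visibility change, are assumptions made only for illustration.

    // Keep an independent loop of recursive requestAnimationFrame calls
    // running for the whole experiment (illustrative sketch, not the
    // authors' published implementation).
    function startRafLoop(): void {
      const loop = (_timestamp: number): void => {
        requestAnimationFrame(loop);
      };
      requestAnimationFrame(loop);
    }
    startRafLoop();

    // With the loop running, a display change is still issued via its own
    // RAF call: the command is given inside the callback, and the callback's
    // timestamp is taken as the display time. (Hypothetical helper.)
    function showStimulus(stimulus: HTMLElement, onShown: (t: number) => void): void {
      requestAnimationFrame((timestamp: number) => {
        stimulus.style.visibility = 'visible'; // visibility change, per Study 2
        onShown(timestamp);
      });
    }

On this reading, the continually running loop is what distinguishes the method from the conventional single RAF call; the display command itself is still given inside a RAF callback, whose timestamp serves as the reported display time.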


2021
Author(s): Nico Braunisch, Sven Schlesinger, Robert Lehmann

Author(s): Sourabh S Badhya, Shobha G

As software systems evolve, there is a growing concern about how to manage and maintain a large codebase and fully understand all the modules present in it. Developers spend a significant amount of time analyzing dependencies before making any changes to a codebase. There is therefore a growing need for applications that help developers comprehend dependencies in large codebases. Such applications must be able to analyze large codebases and identify all the dependencies, so that new developers can quickly understand the codebase and start making changes within a short period of time. Static analysis provides a means of analyzing dependencies in large codebases and is an important part of the software development lifecycle; it has proven extremely useful over the years for comprehending large codebases. Among the many static analysis methods, this paper focuses on the static function call graph (SFCG), which represents dependencies between functions in the form of a graph. The paper evaluates the feasibility of several tools that generate SFCGs and settles on Doxygen, which is highly reliable for large codebases. It also discusses optimizations for Doxygen, along with issues and their corresponding solutions. Finally, the paper presents a way of representing the SFCG that is easier for developers to comprehend.
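
For orientation only (this excerpt is not taken from the paper), Doxygen's call-graph generation, which such a workflow relies on, is typically enabled through a handful of Doxyfile options; the input path below is a placeholder, and Graphviz (dot) is assumed to be installed.

    # Hedged Doxyfile excerpt; INPUT is a placeholder path.
    INPUT            = src
    RECURSIVE        = YES
    EXTRACT_ALL      = YES      # include undocumented functions
    EXTRACT_STATIC   = YES      # include file-local (static) functions
    HAVE_DOT         = YES      # render graphs with Graphviz
    CALL_GRAPH       = YES      # per-function graph of called functions
    CALLER_GRAPH     = YES      # per-function graph of calling functions
    DOT_IMAGE_FORMAT = svg

With these options, Doxygen emits a call graph and a caller graph for each documented function, which is the kind of raw material an SFCG representation can build on.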


Author(s):  
Dirk Beyer

Tool competitions are a special form of comparative evaluation, in which each tool has an associated team of developers or supporters who make sure the tool is properly configured to show its best possible performance. In several research areas, tool competitions have been a driving force for the development of mature tools that represent the state of the art in their field. This paper describes and reports the results of the 1st International Competition on Software Testing (Test-Comp 2019), a comparative evaluation of automatic tools for software test generation. Test-Comp 2019 was presented as part of TOOLympics 2019, a satellite event of the conference TACAS. Nine test generators were evaluated on 2356 test-generation tasks. There were two test specifications: one for generating a test that covers a particular function call, and one for generating a test suite that tries to cover the branches of the program.
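
For reference, the two coverage properties used in Test-Comp are expressed as FQL-based specification files. As far as we can reconstruct them here (treat the exact syntax and the error-function name as assumptions rather than quotations from this paper), they look roughly like this:

    Cover a call to a particular (error) function:
        COVER( init(main()), FQL(COVER EDGES(@CALL(__VERIFIER_error))) )

    Cover the branches of the program:
        COVER( init(main()), FQL(COVER EDGES(@DECISIONEDGE)) )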

