Performance Comparison of Software Complexity Metrics in an Open Source Project

Author(s):  
Min Zhang ◽  
Nathan Baddoo


Author(s):  
Faried Effendy ◽  
Taufik ◽  
Bramantyo Adhilaksono

Substantial research has been conducted to compare web servers or to compare databases, but very little research combines the two. Node.js and Golang (Go) are popular platforms for web and mobile application back-ends, while MySQL and MongoDB are among the best open source databases, with different characteristics. Using MySQL and MongoDB as databases, this study aims to compare the performance of Go and Node.js as web application back-ends in terms of response time, CPU utilization, and memory usage. To simulate an actual web server workload, the flow of data traffic to the server follows a Poisson distribution. The results show that the combination of Go and MySQL is superior in CPU utilization and memory usage, while the combination of Node.js and MySQL is superior in response time.
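
The abstract states only that the simulated traffic follows a Poisson distribution; the study's actual load-generation tooling is not described. As a rough illustration, a Poisson arrival process can be produced by drawing exponentially distributed inter-arrival delays between requests. In the sketch below the endpoint URL, arrival rate, and request count are all hypothetical.

```python
# Minimal sketch: Poisson-distributed request traffic against a back-end
# under test. Endpoint, rate, and request count are illustrative assumptions,
# not the study's actual workload generator.
import random
import time
import urllib.request

ENDPOINT = "http://localhost:8080/"  # hypothetical back-end under test
RATE = 50.0                          # assumed mean arrival rate (requests/second)
NUM_REQUESTS = 1000

latencies = []
for _ in range(NUM_REQUESTS):
    # Poisson process: exponentially distributed inter-arrival times.
    time.sleep(random.expovariate(RATE))
    start = time.perf_counter()
    with urllib.request.urlopen(ENDPOINT) as resp:
        resp.read()
    latencies.append(time.perf_counter() - start)

latencies.sort()
print(f"median response time: {latencies[len(latencies) // 2] * 1000:.1f} ms")
```

CPU utilization and memory usage would be sampled on the server side during the same run, e.g. with OS-level tooling.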


2021 ◽  
Vol 64 (4) ◽  
pp. 25-27
Author(s):  
George V. Neville-Neil

Respect your staff, learn from others, and know when to let go.


2010 ◽  
Vol 7 (4) ◽  
pp. 769-787 ◽  
Author(s):  
Robertas Damasevicius ◽  
Vytautas Stuikys

The concept of complexity is used in many areas of computer science and software engineering. Software complexity metrics can be used to evaluate and compare the quality of software development and maintenance processes and their products. Complexity management and measurement are especially important in novel programming technologies and paradigms, such as aspect-oriented programming, generative programming, and metaprogramming, where complex multi-language and multi-aspect program specifications are developed and used. This paper analyzes complexity management and measurement techniques and proposes five complexity metrics (Relative Kolmogorov Complexity, Metalanguage Richness, Cyclomatic Complexity, Normalized Difficulty, Cognitive Difficulty) for measuring the complexity of metaprograms along the information, metalanguage, graph, algorithm, and cognitive dimensions.
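
Kolmogorov complexity is uncomputable, so relative Kolmogorov complexity is commonly approximated by compression: the smaller the ratio of compressed size to original size, the less information the program text carries per character. The sketch below uses zlib as the stand-in compressor; it illustrates the general compression-based approximation rather than the paper's exact formulation.

```python
# Sketch of a compression-based approximation of Relative Kolmogorov
# Complexity: compressed size divided by original size of the program text.
# Generic approximation; not necessarily the paper's precise definition.
import zlib

def relative_kolmogorov_complexity(source: str) -> float:
    """Approximate K(s)/|s| using zlib as a stand-in for an ideal compressor."""
    data = source.encode("utf-8")
    if not data:
        return 0.0
    return len(zlib.compress(data, level=9)) / len(data)

boilerplate = "print('x')\n" * 100                   # highly repetitive program text
varied = "".join(f"print({i * i})\n" for i in range(100))

print(relative_kolmogorov_complexity(boilerplate))   # low: easy to compress
print(relative_kolmogorov_complexity(varied))        # higher: less redundancy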


2010 ◽  
Vol 37 ◽  
pp. 141-188 ◽  
Author(s):  
P. D. Turney ◽  
P. Pantel

Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM. There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories and we take a detailed look at a specific open source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field.
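
As a small illustration of the term-document class of VSMs described above (not tied to any of the surveyed projects), the sketch below builds a term-count matrix over a toy corpus and compares documents by cosine similarity; the corpus and whitespace tokenization are purely illustrative.

```python
# Sketch of a term-document VSM: each document becomes a vector of term
# counts, and document similarity is the cosine of the angle between vectors.
import math
from collections import Counter

docs = {
    "d1": "open source software metrics",
    "d2": "software complexity metrics survey",
    "d3": "semantic vector space models",
}

# Term-document matrix as a dict of per-document term-count vectors.
matrix = {name: Counter(text.split()) for name, text in docs.items()}

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine(matrix["d1"], matrix["d2"]))  # share "software" and "metrics"
print(cosine(matrix["d1"], matrix["d3"]))  # no shared terms -> 0.0
```

Word-context and pair-pattern matrices follow the same pattern with different choices of rows and columns.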


Author(s):  
Brian Granger ◽  
Fernando Pérez

Project Jupyter is an open-source project for interactive computing, widely used in data science, machine learning, and scientific computing. We argue that even though Jupyter helps users perform complex, technical work, Jupyter itself solves problems that are fundamentally human in nature. Namely, Jupyter helps humans think and tell stories with code and data. We illustrate this by describing three dimensions of Jupyter: interactive computing, computational narratives, and the idea that Jupyter is more than software. We then describe the impact of these dimensions on a community of practice in Earth and climate science.


2013 ◽  
Vol 24 (2) ◽  
pp. 312-333 ◽  
Author(s):  
Sherae Daniel ◽  
Ritu Agarwal ◽  
Katherine J. Stewart
