Novel Approach for Software Metrics Sharing

Author(s):  
И.А. Хомяков

Software metrics collection is a fundamental activity required for almost every empirical study in software engineering. Yet even with a wide range of tools available, gathering this basic data is still time consuming. Moreover, researchers tend to collect essentially the same data (e.g., CK metrics, McCabe cyclomatic complexity) from essentially the same sources (e.g., well-known open-source projects). This duplication of effort across the community reduces the time researchers can spend on the most valuable part of their work: developing new theories and models and evaluating them empirically. In this paper, we propose a novel approach to collecting and sharing software metrics data that allows researchers to collaborate and reduces the amount of wasted effort in the software engineering community. We aim to achieve this goal by proposing the Software Metrics Exchange Format (SMEF), a file format for exchanging software metrics information, and a REST API for collecting, storing, and sharing software metrics data.
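The abstract does not reproduce the SMEF schema or the API routes, so the following is only an illustrative sketch under stated assumptions: the JSON field names, the example metric values, and the `/projects/{id}/metrics` endpoint are all hypothetical, not the published SMEF specification. It shows the general workflow such a service would enable, uploading a collected metrics batch once and letting other researchers query it instead of re-mining the same projects.

```python
import json
import requests  # third-party HTTP client

# Hypothetical SMEF-style payload: project identity plus per-entity metric values.
# The field names and values below are assumptions, not the published SMEF schema.
smef_batch = {
    "format": "SMEF",
    "version": "0.1",
    "project": {"name": "commons-lang", "revision": "rel/commons-lang-3.12.0"},
    "entities": [
        {
            "path": "src/main/java/org/apache/commons/lang3/StringUtils.java",
            "type": "class",
            "metrics": {"WMC": 310, "CBO": 12, "LOC": 9000, "CC_max": 18},
        }
    ],
}

BASE = "https://metrics.example.org/api/v1"  # hypothetical service URL

# Upload the batch so other researchers can reuse it instead of re-collecting it.
resp = requests.post(f"{BASE}/projects/commons-lang/metrics", json=smef_batch, timeout=30)
resp.raise_for_status()

# Later, any researcher could fetch the shared data rather than re-running the collectors.
shared = requests.get(
    f"{BASE}/projects/commons-lang/metrics",
    params={"metric": "CBO", "revision": "rel/commons-lang-3.12.0"},
    timeout=30,
).json()
print(json.dumps(shared, indent=2))
```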

2016 ◽  
Author(s):  
Mark Lemley

The theory of patent law is based on the idea that a lone genius can solve problems that stump the experts, and that the lone genius will do so only if properly incented. We deny patents on inventions that are "obvious" to ordinarily innovative scientists in the field. Our goal is to encourage extraordinary inventions – those that we wouldn't expect to get without the incentive of a patent. The canonical story of the lone genius inventor is largely a myth. Edison didn't invent the light bulb; he found a bamboo fiber that worked better as a filament in the light bulb developed by Sawyer and Man, who in turn built on lighting work done by others. Bell filed for his telephone patent on the very same day as an independent inventor, Elisha Gray; the case ultimately went to the U.S. Supreme Court, which filled an entire volume of U.S. Reports resolving the question of whether Bell could have a patent despite the fact that he hadn't actually gotten the invention to work at the time he filed. The Wright Brothers were the first to fly at Kitty Hawk, but their plane didn't work very well, and was quickly surpassed by aircraft built by Glenn Curtiss and others – planes that the Wrights delayed by over a decade with patent lawsuits. The point can be made more general: surveys of hundreds of significant new technologies show that almost all of them are invented simultaneously or nearly simultaneously by two or more teams working independently of each other. Invention appears in significant part to be a social, not an individual, phenomenon. Inventors build on the work of those who came before, and new ideas are often "in the air," or result from changes in market demand or the availability of new or cheaper starting materials. And in the few circumstances where that is not true – where inventions truly are "singletons" – it is often because of an accident or error in the experiment rather than a conscious effort to invent. The result is a real problem for classic theories of patent law. If we are supposed to be encouraging only inventions that others in the field couldn't have made, we should be paying a lot more attention than we currently do to simultaneous invention. We should be issuing very few patents – surely not the 200,000 per year we do today. And we should be denying patents on the vast majority of the most important inventions, since most seem to involve near-simultaneous invention. Put simply, our dominant theory of patent law doesn't seem to explain the way we actually implement that law. Maybe the problem is not with our current patent law, but with our current patent theory. But the dominant alternative theories of patent law don't do much better. Prospect theory – under which we give patents early to one company so it can control research and development – makes little sense in a world in which ideas are in the air, likely to be happened upon by numerous inventors at about the same time. And commercialization theory, which hypothesizes that we grant patents in order to encourage not invention but product development, seems to founder on a related historical fact: most first inventors turn out to be lousy commercializers who end up delaying implementation of the invention by exercising their rights. If patent law in its current form can be saved, we need an alternative justification for granting patents even in circumstances of near-simultaneous invention. I consider two other possibilities. First, patent rights encourage patent races, and that might actually be a good thing. Second, patents might facilitate markets for technology. Both have some logic to them, but neither fully justifies patent law in its current form. As a result, I offer some suggestions for reforming patent law to take account of the prevalence of simultaneous invention.


2018 ◽  
Vol 2 (5) ◽  
pp. 682
Author(s):  
Masniar Masniar

The various difficulties in learning English that have been an obstacle for almost all students should be a valuable lesson to spark new ideas in group learning implementation programs. To overcome the problem of the low level of English learning outcomes of class VII students of Bangkinang State 2 Junior High School in Kampar Regency, group learning is one good alternative. The study is a classroom action research conducted in Bangkinang Kota 2 Public Middle School, Kampar district. The subjects of this study were seventh grade students. The results of the study obtained data on teacher activity in the first cycle with a percentage of 57% at the first meeting and 66.5% at the second meeting; in the second cycle, the percentage was 83.5% at the third meeting and 90.5% at the fourth meeting. The observation data of students in the first cycle was 51% at the first meeting and 62.5% at the second meeting; in the second cycle, it was 80% at the third meeting and 88% at the fourth meeting. Data on the improvement of learning outcomes showed an average of 63 in the initial data, 75 in the first daily test, and 88 in the second daily test.


Author(s):  
Seetharam .K ◽  
Sharana Basava Gowda ◽  
. Varadaraj

In software engineering, software metrics have a wide and deep scope. Many projects fail because of risks in software engineering development [1]. Among the various risk factors, requirements creep is one. The paper discusses the approximate volume of creeping requirements that occur after the completion of the nominal requirements phase, using software size measured in function points at four different levels. The major risk factors depend both directly and indirectly on the size of the software being developed. Hence it is possible to predict the risk due to creep from size.
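The abstract does not give the estimation method itself, so the sketch below only illustrates the general idea under an assumed rule of thumb: creeping requirements are modeled as a fixed monthly fraction of the size, in function points, that was agreed at the end of the nominal requirements phase. The 2% monthly rate, the 12-month window, and the four size levels are placeholders, not values from the paper.

```python
# Illustrative only: an assumed creep model, not the paper's actual method.
# Creep is approximated as a constant monthly fraction of the baseline size.

def creeping_requirements(baseline_fp: float, months_after_requirements: int,
                          monthly_creep_rate: float = 0.02) -> float:
    """Approximate volume of creeping requirements, in function points."""
    return baseline_fp * monthly_creep_rate * months_after_requirements

# Four hypothetical project size levels (function points) over a 12-month build.
for baseline_fp in (100, 1_000, 10_000, 100_000):
    creep = creeping_requirements(baseline_fp, months_after_requirements=12)
    print(f"{baseline_fp:>7} FP baseline -> ~{creep:,.0f} FP of creeping requirements")
```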


1997 ◽  
Vol 6 (5) ◽  
pp. 547-564 ◽  
Author(s):  
David R. Pratt ◽  
Shirley M. Pratt ◽  
Paul T. Barham ◽  
Randall E. Barker ◽  
Marianne S. Waldrop ◽  
...  

This paper examines the representation of humans in large-scale, networked virtual environments. Previous work done in this field is summarized, and existing problems with rendering, articulating, and networking numerous human figures in real time are explained. We have developed a system that integrates some well-known solutions along with new ideas. Models with multiple levels of detail, body-tracking technology and animation libraries to specify joint angles, efficient group representations to describe multiple humans, and hierarchical network protocols have been successfully employed to increase the number of humans represented, system performance, and user interactivity. The resulting system immerses participants effectively and has numerous useful applications.
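As a generic illustration of one of the techniques mentioned above, multiple levels of detail, the sketch below selects a human model variant from the viewer's distance. The thresholds, level names, and polygon counts are invented for the example and are not taken from the system described in the paper.

```python
# Generic distance-based level-of-detail selection for human figures.
# Thresholds and polygon budgets are illustrative, not from the paper's system.
from dataclasses import dataclass

@dataclass
class LodLevel:
    name: str
    max_distance: float  # metres from the viewpoint
    polygons: int

LOD_TABLE = [
    LodLevel("full articulated body", 10.0, 5_000),
    LodLevel("reduced joint set", 50.0, 1_200),
    LodLevel("billboard sprite", float("inf"), 2),
]

def select_lod(distance: float) -> LodLevel:
    """Pick the cheapest representation that is acceptable at this distance."""
    for level in LOD_TABLE:
        if distance <= level.max_distance:
            return level
    return LOD_TABLE[-1]

for d in (3.0, 25.0, 400.0):
    lod = select_lod(d)
    print(f"{d:>6.1f} m -> {lod.name} ({lod.polygons} polygons)")
```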


2021 ◽  
Vol 109 (4) ◽  
Author(s):  
Anson Parker ◽  
Abbey Heflin ◽  
Lucy Carr Jones

As part of a larger project to understand the publishing choices of UVA Health authors and support open access publishing, a team from the Claude Moore Health Sciences Library analyzed an open data set from Europe PMC, which includes metadata from PubMed records. We used the Europe PMC REST API to search for articles published in 2017–2020 with “University of Virginia” in the author affiliation field. Subsequently, we parsed the JSON metadata in Python and used Streamlit to create a data visualization from our public GitHub repository. At present, this shows the relative proportions of open access versus subscription-only articles published by UVA Health authors. Although subscription services like Web of Science, Scopus, and Dimensions allow users to do similar analyses, we believe this is a novel approach to doing this type of bibliometric research with open data and open source tools.  
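A minimal sketch of the kind of Europe PMC query described above: the search endpoint, cursor-based paging, and the `isOpenAccess` result field follow Europe PMC's public REST documentation, but the exact query string, the affiliation handling, and the authors' Streamlit code from their GitHub repository are not reproduced here.

```python
# Tally open access vs. subscription-only records for a given affiliation and
# year range using the Europe PMC REST API (search endpoint, JSON format).
import requests

EPMC_SEARCH = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"
query = 'AFF:"University of Virginia" AND PUB_YEAR:[2017 TO 2020]'

counts = {"open_access": 0, "subscription": 0}
cursor = "*"
while True:
    resp = requests.get(
        EPMC_SEARCH,
        params={"query": query, "format": "json", "pageSize": 1000, "cursorMark": cursor},
        timeout=60,
    )
    resp.raise_for_status()
    payload = resp.json()
    for record in payload.get("resultList", {}).get("result", []):
        if record.get("isOpenAccess") == "Y":
            counts["open_access"] += 1
        else:
            counts["subscription"] += 1
    next_cursor = payload.get("nextCursorMark")
    if not next_cursor or next_cursor == cursor:  # no further pages
        break
    cursor = next_cursor

print(counts)
```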


Author(s):  
Timothy K. Perttula

The Shelby site (41CP71) is an important Late Caddo period, Titus phase, religious and political center on Greasy Creek in the Northeast Texas Pineywoods. The site, occupied from the 15th century A.D. until at least the late 17th century A.D., is a large and well-preserved settlement with abundant habitation features as well as plant and animal remains, evidence of mound building activities in the form of a 1.5 m high structural mound, and a large community cemetery with at least 119 burial pits and perhaps as many as 200. The Shelby site is the nexus of one of a number of Titus phase political communities in the Big Cypress Creek stream basin. Nevertheless, very little is known archaeologically about the site—or the history of the Caddo’s settlement there—since almost all the work done at the site since it was discovered in 1979 has been by looters. Perttula and Nelson completed a limited amount of work in the village area in 2003, and Bob Turner and others worked in the 1.5 m high structural mound between 1985-1988, but an overall synthesis of the Caddo occupation at the Shelby site awaits more extensive professional archaeological investigations. One key step in any professional archaeological work that may be forthcoming at the site includes the documentation of Caddo material culture remains, especially Caddo ceramics, that are known to have come from the site, as they provide a record of the temporal, functional, and stylistic range of the ceramic vessels used and discarded at the site, as well as evidence of interaction and contact between different but contemporaneous Caddo groups. In August 2009, I had an opportunity to document a collection of Caddo ceramic sherds held by Vernon Holcomb from the Shelby site. He collected these sherds from the surface of the site some 25-30 years ago where they had been eroded out of the banks of a dry or intermittent stream branch that drains north to Greasy Creek.


2015 ◽  
pp. 997-1012
Author(s):  
Jagadeesh Nandigam ◽  
Venkat N Gudivada

This chapter describes a pragmatic approach to using open source and free software tools as valuable resources to affect learning of software industry practices using iterative and incremental development methods. The authors discuss how the above resources are used in teaching undergraduate Software Engineering (SE) courses. More specifically, they illustrate iterative and incremental development, documenting software requirements, version control and source code management, coding standards compliance, design visualization, software testing, software metrics, release deliverables, software engineering ethics, and professional practices. The authors also present how they positioned the activities of this course to qualify it for writing intensive designation. End of semester course evaluations and anecdotal evidence indicate that the proposed approach is effective in educating students in software industry practices.


2011 ◽  
pp. 1172-1181
Author(s):  
S. Parthasarathy

Business information systems are an area of the greatest significance in any business enterprise today. Enterprise Resource Planning (ERP) projects are a growing segment of this vital area. Software engineering metrics are units of measurement used to characterize software engineering products and processes. Research on the software process has acquired great importance in the last few years due to the growing interest of software companies in improving their quality. ERP projects are very complex products, and this fact is directly linked to their development and maintenance. One of the major reasons found in the literature for the failure of ERP projects is the poor management of software processes. In this chapter, the authors propose a Software Metrics Plan (SMP) containing different software metrics to manage software processes during ERP implementation. Two hypotheses have been formulated and tested using statistical techniques to validate the SMP. The statistical analysis of the data collected from an ERP project supports the two hypotheses, leading to the conclusion that software metrics are momentous in ERP projects.


Commercial-off-the-shelf (COTS) Simulation Packages (CSPs) are widely used in industry, primarily due to economic factors associated with developing proprietary software platforms. Regardless of their widespread use, CSPs have yet to operate across organizational boundaries. The limited reuse and interoperability of CSPs are affected by the same semantic issues that restrict the inter-organizational use of software components and web services. The current representations of Web components are predominantly syntactic in nature, lacking the fundamental semantic underpinning required to support discovery on the emerging Semantic Web. The authors present new research that partially alleviates the problem of limited semantic reuse and interoperability of simulation components in CSPs. Semantic models, in the form of ontologies, utilized by the authors' Web service discovery and deployment architecture, provide one approach to support simulation model reuse. Semantic interoperation is achieved through a simulation component ontology that is used to identify required components at varying levels of granularity (i.e., including both abstract and specialized components). Selected simulation components are loaded into a CSP, modified according to the requirements of the new model, and executed. The research presented here is based on the development of an ontology, connector software, and a Web service discovery architecture. The ontology is extracted from example simulation scenarios involving airport, restaurant, and kitchen service suppliers. The ontology engineering framework and discovery architecture provide a novel approach to inter-organizational simulation by adopting a less intrusive interface between participants. Although specific to CSPs, this work has wider implications for the simulation community: the community as a whole stands to benefit from an increased awareness of the state of the art in Software Engineering (for example, ontology-supported component discovery and reuse, and service-oriented computing), and it is expected that this will eventually lead to the development of a unique Software Engineering-inspired methodology to build simulations in the future.
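The toy sketch below illustrates the general idea of ontology-based discovery of simulation components at varying levels of granularity; the concepts, capabilities, and service suppliers are invented for the example and are not the ontology or discovery architecture developed in the work described above.

```python
# Toy illustration of ontology-based component discovery: concepts inherit
# capabilities from their parents, and a requirement is matched against both
# abstract and specialized components. All names below are hypothetical.
ONTOLOGY = {
    "ServiceSupplier": {"parents": [], "capabilities": {"serves_entities"}},
    "RestaurantSupplier": {"parents": ["ServiceSupplier"],
                           "capabilities": {"seating", "food_preparation"}},
    "KitchenSupplier": {"parents": ["RestaurantSupplier"],
                        "capabilities": {"food_preparation"}},
    "AirportCheckInSupplier": {"parents": ["ServiceSupplier"],
                               "capabilities": {"queueing", "document_check"}},
}

def all_capabilities(concept: str) -> set[str]:
    """Collect the capabilities of a concept and all of its ancestors."""
    caps = set(ONTOLOGY[concept]["capabilities"])
    for parent in ONTOLOGY[concept]["parents"]:
        caps |= all_capabilities(parent)
    return caps

def discover(required: set[str]) -> list[str]:
    """Return components, abstract or specialized, that satisfy the requirement."""
    return [c for c in ONTOLOGY if required <= all_capabilities(c)]

print(discover({"food_preparation"}))             # specialized restaurant/kitchen suppliers
print(discover({"serves_entities", "queueing"}))  # the airport check-in supplier
```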

