Assessing JavaScript API Deprecation

Author(s):  
Romulo Nascimento ◽  
Eduardo Figueiredo ◽  
Andre Hora

Building an application using third-party libraries is a common practice in software development. Like any other software system, code libraries and their APIs evolve over time. To help with version migration and ensure backward compatibility, a recommended practice during development is to deprecate APIs. Although studies have been conducted to investigate deprecation in some programming languages, such as Java and C#, there are no detailed studies on API deprecation in the JavaScript ecosystem. The goal of this master's research is to investigate the deprecation of JavaScript APIs. In a first assessment, we analyzed popular software projects to identify API deprecation occurrences and classify them. We are now conducting a survey with developers to understand their thoughts on and experiences with JavaScript API deprecation. Lastly, we plan to develop a set of JavaScript API deprecation guidelines based on the results of this research. Initial results suggest that the use of deprecation mechanisms in JavaScript packages is low. However, we were able to identify five different approaches that developers primarily use to deprecate APIs in the studied projects. Among these, a deprecation utility (i.e., any sort of function written specifically to aid deprecation) and code comments are the most common practices in JavaScript. Finally, we found that the rate of helpful messages is high: 67% of the deprecation occurrences include replacement messages to support developers when migrating APIs.
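The "deprecation utility" approach the abstract describes can be sketched as a small wrapper that warns callers and delegates to the replacement API. This is a hypothetical illustration; the function and API names below are invented, not taken from the studied projects:

```javascript
// Minimal sketch of a deprecation utility: wraps an old API so that the
// first call emits a warning pointing at the replacement, then delegates.
function deprecate(fn, message) {
  let warned = false;
  return function (...args) {
    if (!warned) {
      warned = true;
      console.warn(`DeprecationWarning: ${message}`);
    }
    return fn.apply(this, args);
  };
}

// New API (the replacement) and the old name kept for compatibility.
function parseConfigV2(text) {
  return JSON.parse(text);
}

const parseConfig = deprecate(
  parseConfigV2,
  "parseConfig() is deprecated; use parseConfigV2() instead."
);

parseConfig('{"debug": true}'); // still works, warns once per process
```

Libraries such as Node.js expose a comparable built-in (`util.deprecate`); the sketch above only shows the general shape of the pattern.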

Author(s):  
Jungil Kim ◽  
Eunjoo Lee

GitHub and Stack Overflow are often used together in software development. GH-SO users, who use both GitHub and Stack Overflow, contribute to the development of various software projects on GitHub and share their knowledge and experience of software development on Stack Overflow. To better understand the interests and working habits of GH-SO users, it is important to investigate how they utilize GitHub and Stack Overflow. In this paper, we present an exploratory study of the GitHub commit and Stack Overflow post activities of GH-SO users. Specifically, we investigate the working habits of GH-SO users in their commit and post activities. We randomly selected 19,756 GH-SO users as our target sample and collected 2,819,483 commit activity records and 2,147,317 post activity records for these users. We then categorized the collected commit and post activity datasets by programming language and statistically analyzed them. From our analysis, we found the following: (1) the overall commit and post activities of GH-SO users share some similarity; (2) commit activities change gradually while post activities change drastically over time; (3) commit activities are broadly distributed while post activities are narrowly distributed, and commit activity can be a better predictor of post activity; (4) the commit activity of GH-SO users tends to occur prior to post activity. We believe that our findings can contribute to finding ways to better support the commit and post activities of GitHub and Stack Overflow users.


Author(s):  
Simar Preet Singh ◽  
Rajesh Kumar ◽  
Anju Sharma ◽  
S. Raji Reddy ◽  
Priyanka Vashisht

Background: The fog computing paradigm has recently emerged and gained attention in the present era of the Internet of Things. The growing number of devices leads to a flow of packets everywhere on the Internet. To overcome this situation and to provide computation at the network edge, fog computing is needed to enhance traffic management and avoid critical situations such as congestion. Methods: For research purposes, there are several ways to implement fog computing scenarios: real-time implementation, implementation using emulators, and implementation using simulators. The present study describes the various simulation and emulation tools for implementing fog computing scenarios. Results: The review shows that iFogSim is the simulator that most researchers use in their work. Among emulators, EmuFog is used more widely than the other available emulators. This may be due to the ease of implementation, the user-friendly nature of these tools, and the languages they are based on. The use of such tools improves the research experience and leads to improved quality-of-service parameters (such as bandwidth, network, and security). Conclusion: There are many fog computing simulators and emulators based on different platforms that use different programming languages. The paper concludes that the two main simulation and emulation tools in the area of fog computing are iFogSim and EmuFog. The accessibility and ease of use of these tools improve the research experience and lead to improved quality-of-service parameters.


Author(s):  
Rupesh Wadher

Examination of the ongoing pathology in a patient's body is essential for a physician to estimate the dose of a drug. However, the examination method described in Ayurveda is incomplete without the present concept of Aturaparijnana Hetu. With the help of Aturaparijnana Hetu, the traditional methods of assessing a person (the Dashavidha Pariksha) become more accurate and powerful. Aturaparijnana Hetu provides a standard for a person, giving the examination method the foundation needed for grading. In short, a person's residual strength can be documented. This article highlights, through a survey study, how a group can be identified by its respective Desha and its role in the Dashavidha Pariksha. Dehabala and Doshabala are assessed by these methods.


SAGE Open ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 215824402110241
Author(s):  
Ya-Ling Chiu ◽  
Yuan-Teng Hsu ◽  
Xiaoyu Mao ◽  
Jying-Nan Wang

When online retailers allow third-party sellers to place certain products on their platforms, these sellers become not only collaborators but also competitors. The purpose of this study is to compare the differences in price discounts between Third-Party Marketplace (3PM) sellers and Fulfilled by Walmart (FBW) sellers on Walmart.com over time. The results, based on data collected in the form of the daily prices of 54,162 products offered by Walmart during the holiday season, show that the average discount for 3PM sellers is significantly lower than that for FBW sellers. In addition, across product categories, FBW sellers had significantly higher average discounts than 3PM sellers in the electronics, housewares, and toys categories. Furthermore, the level of discount began to increase in early November and peaked around Christmas. Our findings may help retailers manage their dealings with these third-party sellers while also helping consumers to optimize their purchasing decisions.


2010 ◽  
Vol 19 (01) ◽  
pp. 65-99 ◽  
Author(s):  
MARC POULY

Computing inference from a given knowledge base is one of the key competences of computer science. Numerous formalisms and specialized inference routines have therefore been introduced and implemented for this task. Typical examples are Bayesian networks, constraint systems, and different kinds of logic. It is known today that these formalisms can be unified under a common algebraic roof called a valuation algebra. Based on this system, generic inference algorithms for processing arbitrary valuation algebras can be defined. Researchers benefit from this high level of abstraction to address open problems independently of the underlying formalism. It is therefore all the more astonishing that this theory has not found its way into concrete software projects. Indeed, all modern programming languages provide generic sorting procedures, for example, but generic inference algorithms are still mythical creatures. NENOK breaks new ground and offers an extensive library of generic inference tools based on the valuation algebra framework. All methods are implemented as distributed algorithms that process local and remote knowledge bases in a transparent manner. Besides its main purpose as a software library, NENOK also provides a sophisticated graphical user interface for inspecting the inference process and the graphical structures involved. This can be used for educational purposes but also as a fast prototyping architecture for inference formalisms.
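The valuation algebra abstraction the abstract describes rests on two generic operations, combination and marginalization: any formalism providing them can reuse the same inference code. The sketch below illustrates the idea with discrete probability tables over binary variables; the names and representation are illustrative and are not NENOK's actual API (NENOK is a Java library):

```javascript
// A valuation here is { vars: [...], table: Map from "v1,v2,..." -> number },
// where the key lists the assigned values in the order of vars.
function key(assign, vars) {
  return vars.map(v => assign[v]).join(",");
}

// Enumerate all 0/1 assignments over the given variables.
function* assignments(vars) {
  for (let i = 0; i < (1 << vars.length); i++) {
    const a = {};
    vars.forEach((v, j) => { a[v] = (i >> j) & 1; });
    yield a;
  }
}

// Combination: pointwise product on the union of the variable sets.
function combine(p, q) {
  const vars = [...new Set([...p.vars, ...q.vars])];
  const table = new Map();
  for (const a of assignments(vars)) {
    table.set(key(a, vars),
      (p.table.get(key(a, p.vars)) ?? 0) * (q.table.get(key(a, q.vars)) ?? 0));
  }
  return { vars, table };
}

// Marginalization: sum out all variables not in `keep`.
function marginalize(p, keep) {
  const table = new Map();
  for (const a of assignments(p.vars)) {
    const k = key(a, keep);
    table.set(k, (table.get(k) ?? 0) + p.table.get(key(a, p.vars)));
  }
  return { vars: keep, table };
}

// Example: combine P(x) with P(y|x), then marginalize to y.
const px = { vars: ["x"], table: new Map([["0", 0.4], ["1", 0.6]]) };
const pyx = { vars: ["x", "y"],
  table: new Map([["0,0", 0.9], ["0,1", 0.1], ["1,0", 0.2], ["1,1", 0.8]]) };
const py = marginalize(combine(px, pyx), ["y"]);
```

The point of the framework is that `combine` and `marginalize` could just as well operate on constraint sets or logical formulas; the inference routine built on top of them would not change.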


2016 ◽  
Vol 2016 (1) ◽  
pp. 4-19 ◽  
Author(s):  
Andreas Kurtz ◽  
Hugo Gascon ◽  
Tobias Becker ◽  
Konrad Rieck ◽  
Felix Freiling

Recently, Apple removed access to various device hardware identifiers that were frequently misused by iOS third-party apps to track users. We therefore study the extent to which smartphone users can still be uniquely identified simply through their personalized device configurations. Using Apple's iOS as an example, we show how a device fingerprint can be computed using 29 different configuration features. These features can be queried from arbitrary third-party apps via the official SDK. Experimental evaluations based on almost 13,000 fingerprints from approximately 8,000 different real-world devices show that (1) all fingerprints are unique and distinguishable; and (2) utilizing a supervised learning approach allows returning users or their devices to be recognized with a total accuracy of 97% over time.


Author(s):  
Kyra B. Phillips ◽  
Kelly N. Byrne ◽  
Branden S. Kolarik ◽  
Audra K. Krake ◽  
Young C. Bui ◽  
...  

Since COVID-19 transmission accelerated in the United States in March 2020, guidelines have recommended that individuals wear masks and limit close contact by remaining at least six feet away from others, even while outdoors. Such behavior is important to help slow the spread of the global pandemic; however, it may require pedestrians to make critical decisions about entering a roadway in order to avoid others, potentially creating hazardous situations for both themselves and for drivers. In this survey study, we found that while overall patterns of self-reported pedestrian activity remained largely consistent over time, participants indicated increased willingness to enter active roadways when encountering unmasked pedestrians since the COVID-19 pandemic began. Participants also rated the risks of encountering unmasked pedestrians as greater than those associated with entering a street, though the perceived risk of passing an unmasked pedestrian on the sidewalk decreased over time.


2014 ◽  
Vol 19 (3) ◽  
pp. 53-64 ◽  
Author(s):  
Sarah Wilson

Maintaining a 'critical reflexivity' (Heaphy 2008) or 'investigative epistemology' (Mason 2007) in relation to the sedimented assumptions built up over the course of one's own research history, and embedded in common research boundaries, is difficult. The type of secondary analysis discussed in this paper is not an easy or quick 'fix' for the important issue of how such assumptions can embed themselves over time in the methods chosen and questions asked. Even though archived studies are often accompanied by relatively detailed metadata, finding relevant data and getting a grasp on a sample is time-consuming. However, it is argued that close examination of rawer data than those presented in research reports, drawn from carefully chosen studies combining similar foci and epistemological approaches but with differently situated samples, can help. Here, this process highlighted assumptions underlying the habitual disciplinary locations and constructions of so-called 'vulnerable' as opposed to 'ordinary' samples, leading the author to scrutinise aspects of her previous research in this light and providing important insights for the development of further projects.


2021 ◽  
Vol 11 (22) ◽  
pp. 10686
Author(s):  
Syeda Amna Sohail ◽  
Faiza Allah Bukhsh ◽  
Maurice van Keulen

Healthcare providers are legally bound to ensure the privacy preservation of healthcare metadata. Usually, privacy research focuses on providing technical and inter-/intra-organizational solutions in a fragmented manner. As a result, an overarching evaluation of the fundamental (technical, organizational, and third-party) privacy-preserving measures in healthcare metadata handling is missing. This research work therefore provides a multilevel privacy assurance evaluation of the privacy-preserving measures of the Dutch healthcare metadata landscape. The normative and empirical evaluation comprises content analysis and process mining discovery and conformance checking techniques using real-world healthcare datasets. For clarity, we illustrate our evaluation findings using conceptual modeling frameworks, namely e3-value modeling and the REA ontology. These frameworks highlight the financial aspect of metadata sharing with a clear description of the vital stakeholders, their mutual interactions, and the respective exchange of information resources. The frameworks are further verified using experts' opinions. Based on our empirical and normative evaluations, we provide a multilevel privacy assurance evaluation with levels of privacy increase and decrease. Furthermore, we verify that the privacy-utility trade-off is crucial in shaping privacy increase/decrease, because data utility in healthcare is vital for efficient, effective healthcare services and the financial facilitation of healthcare enterprises.


2021 ◽  
Vol 58 (3) ◽  
Author(s):  
Torbjörn Bildtgård ◽  
Marianne Winqvist ◽  
Peter Öberg

The increasing prevalence of ageing stepfamilies and the potential of stepchildren to act as a source of support for older parents have increased the interest in long-term intergenerational step relationships. Applying a life-course perspective combined with Simmel’s theorizing on social dynamics, this exploratory study aims to investigate the preconditions for cohesion in long-term intergenerational step relationships. The study is based on interviews with 13 older parents, aged 66–79, who have raised both biological children and stepchildren. Retrospective life-course interviews were used to capture the development of step relationships over time. Interviews were analysed following the principles of analytical induction. The results reveal four central third-party relationships that are important for cohesion in intergenerational step relationships over time, involving: (1) the intimate partner; (2) the non-residential parent; (3) the bridge child; and (4) the stepchild-in-law. The findings have led to the conclusion that if we are to understand the unique conditions for cohesion in long-term intergenerational step relationships, we cannot simply compare biological parent–child dyads with step dyads, because the step relationship is essentially a mediated relationship.

