Who Cares About My Feature Request?

Author(s):  
Lukas Heppler ◽  
Remo Eckert ◽  
Matthias Stuermer

2019 ◽  
Vol 38 (1) ◽  
pp. 5-7
Author(s):  
Kevin M Ford

User stories and use cases help focus any development project on those who stand to benefit, i.e. the project’s stakeholders, and can simultaneously guard against insufficient planning and software bloat. Although these concepts are most often associated with large-scale projects, they apply in all circumstances, from the smallest feature request for an existing system to the redesign of a complex system.


Author(s):  
I Made Mika Parwita ◽  
Daniel Siahaan

App reviews are useful to app developers because they contain valuable information, e.g. bug reports, feature requests, user experiences, and ratings. This information can be used to better understand user needs and application defects during the software maintenance and evolution phase. However, the increasing number of reviews creates problems for developers during analysis. Reviews in textual form are difficult to process because the semantics between sentences must be considered, and manual checking is time-consuming, effortful, and costly. Previous research shows that review collections contain non-informative reviews, i.e. reviews without valuable information; these are considered noise and should be eliminated, especially before classification. Moreover, earlier review classification did not account for semantic relations between sentences. The purpose of this research is to automatically classify user reviews into three classes: bug, feature request, and non-informative. User reviews are converted into vectors using word embeddings to handle the semantic problem. These vectors are input to a first classifier that separates informative from non-informative reviews. The informative reviews identified by the first classifier are then passed to a second classifier that determines their category, i.e. bug report or feature request. The experiment used 306,849 review sentences crawled from Google Play and F-Droid. The results show that the proposed model classifies mobile application reviews with a best accuracy of 0.79, precision of 0.77, recall of 0.87, and F-measure of 0.81.  
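The two-stage cascade described above can be sketched minimally in Python. This is an illustrative assumption, not the paper's implementation: simple keyword scoring stands in for the word-embedding-based classifiers, and the keyword lists and example reviews are invented.

```python
# Two-stage review classification cascade (illustrative sketch).
# Stage 1 separates informative from non-informative reviews;
# stage 2 labels informative reviews as bug report or feature request.
# Keyword scoring is a stand-in for the trained embedding-based models.

BUG_WORDS = {"crash", "crashes", "error", "freeze", "bug", "broken"}
FEATURE_WORDS = {"add", "wish", "would", "please", "support", "option"}

def stage1_informative(review: str) -> bool:
    """A review counts as 'informative' here if it mentions any actionable term."""
    tokens = set(review.lower().split())
    return bool(tokens & (BUG_WORDS | FEATURE_WORDS))

def stage2_category(review: str) -> str:
    """Label an informative review as 'bug' or 'feature request'."""
    tokens = set(review.lower().split())
    bug_score = len(tokens & BUG_WORDS)
    feature_score = len(tokens & FEATURE_WORDS)
    return "bug" if bug_score >= feature_score else "feature request"

def classify(review: str) -> str:
    if not stage1_informative(review):
        return "non-informative"
    return stage2_category(review)

for r in ["app keeps crash on startup",
          "please add dark mode support",
          "great app love it"]:
    print(r, "->", classify(r))
```

The cascade structure (filter noise first, then categorize) mirrors the paper's pipeline; in practice each stage would be a trained model over embedding vectors rather than keyword overlap.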


Author(s):  
Thorsten Merten ◽  
Matus Falis ◽  
Paul Hubner ◽  
Thomas Quirchmayr ◽  
Simone Bursner ◽  
...  

2012 ◽  
Vol 17 (2) ◽  
pp. 117-132 ◽  
Author(s):  
Camilo Fitzgerald ◽  
Emmanuel Letier ◽  
Anthony Finkelstein

2017 ◽  
Vol 10 (2) ◽  
pp. 41
Author(s):  
Raditya Maulana Anuraga ◽  
Ema Utami ◽  
Hanif Al Fatta

Listeno is the first audiobook application in Indonesia, letting users consume books in audio form much as they would listen to music. Listeno faces several problems: a requested offline mode that has not yet been released, the security of its mp3 files, and an active-user base that has not yet reached the 100,000 target. This research aims to evaluate user satisfaction with the audiobook application using Nielsen's approach. The analysis combines Importance Performance Analysis (IPA) with a User Satisfaction Index (IKP) based on the following indicators: usefulness, utility, usability, learnability, efficiency, memorability, errors, and satisfaction. The results show that users of the audiobook application are quite satisfied, with a calculated IKP of 69.58%.
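A satisfaction index of the IKP kind can be computed as a weighted score, as sketched below. This is one common formulation, not necessarily the exact formula the study used, and the importance/performance scores are invented example data rather than the study's measurements.

```python
# User Satisfaction Index (IKP) on a 5-point scale (illustrative sketch).
# Each attribute's importance determines its weight; the weighted mean
# of performance scores is expressed as a percentage of the scale maximum.

def satisfaction_index(importance, performance, scale_max=5):
    """IKP = sum(weight_i * performance_i) / scale_max * 100,
    where weight_i is attribute i's share of total importance."""
    total_importance = sum(importance)
    weighted = sum((imp / total_importance) * perf
                   for imp, perf in zip(importance, performance))
    return weighted / scale_max * 100

# Invented example: importance vs. performance for eight attributes
importance  = [4.5, 4.2, 4.4, 4.0, 4.1, 3.9, 4.3, 4.6]
performance = [3.6, 3.4, 3.5, 3.3, 3.4, 3.2, 3.3, 3.7]
print(f"IKP = {satisfaction_index(importance, performance):.2f}%")
```

With uniform importance the index reduces to the mean performance as a percentage of the scale, which makes the weighting role of the importance scores easy to check.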


Author(s):  
Evgeniy Meyke

Complex projects that collect, curate and analyse biodiversity data are often presented with the challenge of accommodating diverse data types, various curation and output workflows, and evolving project logistics that require rapid changes in the applications and data structures. At the same time, sustainability concerns and maintenance overheads pose a risk to the long-term viability of such projects. We advocate the use of flexible, multiplatform tools that adapt to operational, day-to-day challenges while providing a robust, cost-efficient, and maintainable framework that serves the needs of data collectors, managers and users. EarthCape is a highly versatile platform for managing biodiversity research and collections data, associated molecular laboratory data (Fig. 1), multimedia, structured ecological surveys and monitoring schemes, and more. The platform includes a fully functional Windows client as well as a web application. The data are stored in the cloud or on-premises and can be accessed by users with various access and editing rights. Ease of customization (making changes to the user interface and functionality) is critical for most environments that deal with operational research processes. For active researchers and curators, there is rarely time to wait for a development cycle that follows a change or feature request. In EarthCape, most changes to the default setup can be implemented by the end users with minimum effort and require no programming skills. High flexibility and a range of customisation options are complemented by mapping to the Darwin Core standard and integration with the GBIF, Geolocate, Genbank, and Biodiversity Heritage Library APIs. The system is currently used daily for rapid data entry, digitization and sample tracking by such organisations as Imperial College, University of Cambridge, University of Helsinki, and University of Oxford. 
Being an operational data entry and retrieval tool, EarthCape sits at the bottom of the Virtual Research Environments ecosystem. It is not a software platform for building data repositories, but rather a very focused tool in the "back office" software category. Routine label printing, laboratory notebook maintenance, rapid data-entry setups, and other relatively loaded user interfaces work against any industry-standard relational database back end. This opens a wide scope for IT designers to implement desired integrations within their institutional infrastructure. APIs and developer access to core EarthCape libraries for building one's own applications and modules are under development. Basic data visualisation (charts, pivots, dashboards), mapping (a full-featured desktop GIS module), and data outputs (a report and label designer) are tailored not only to research analyses but also to managing logistics and communication when working on (data) papers. The presentation will focus on the software platform, featuring the most prominent use cases from two areas: ecological research (managing a complex network data digitization project) and museum collections management (herbarium and insect collections).


Author(s):  
MPS Bhatia ◽  
Akshi Kumar ◽  
Rohit Beniwal

Background: App stores, for example Google Play and the Apple App Store, provide a platform that allows users to give feedback on apps in the form of reviews. An app review typically includes a star rating followed by a comment. Recent studies have shown that these reviews are a vital source of information that app developers and vendors can use to improve future versions of an app. However, in most cases these reviews are unstructured, and extracting useful information from them requires great effort. Objective: This article provides an optimized classification approach that automatically classifies reviews into bug reports, feature requests, and shortcoming & improvement requests relevant to requirements engineering. Method: Our methodology merges three techniques, namely (1) text analysis, (2) natural language processing, and (3) sentiment analysis, to extract a feature set, which is then used to automatically classify app reviews into their relevant categories. Results: We achieved our best baseline results, a precision of 67.8% and a recall of 41.5%, with the Logistic Regression machine learning technique, which we further optimized with the nature-inspired PSO algorithm; Logistic Regression + PSO yields a precision of 74.4% and a recall of 45.0%. Conclusion: This optimized automatic classification improves requirements engineering, since the developer knows straightforwardly what to improve next in the concerned app.
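The PSO optimization step above can be sketched minimally as follows. This is an illustrative assumption, not the paper's setup: the one-dimensional toy objective stands in for the cross-validated classifier score being maximized, and all constants (swarm size, inertia, cognitive/social coefficients) are conventional defaults chosen for the sketch.

```python
import random

# Minimal particle swarm optimisation (PSO) for tuning one hyperparameter.
# The toy objective peaks at x = 2.0; in the paper's setting it would be
# the cross-validated performance of a Logistic Regression classifier.

def objective(x):
    return -(x - 2.0) ** 2  # maximise: best value at x = 2.0

def pso(n_particles=10, n_iters=50, bounds=(-5.0, 5.0), seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                    # each particle's best-seen position
    gbest = max(pos, key=objective)   # swarm-wide best position
    for _ in range(n_iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # standard velocity update: inertia + cognitive + social terms
            vel[i] = (0.5 * vel[i]
                      + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], lo), hi)
            if objective(pos[i]) > objective(pbest[i]):
                pbest[i] = pos[i]
            if objective(pos[i]) > objective(gbest):
                gbest = pos[i]
    return gbest

print(f"best hyperparameter ~ {pso():.3f}")
```

Swapping the toy objective for a real evaluation function (e.g. held-out precision/recall of a classifier at a given regularization strength) turns this into hyperparameter tuning of the kind the abstract describes.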


2018 ◽  
Vol 10 (1) ◽  
pp. 34-40 ◽  
Author(s):  
Andre Rusli

Requirements engineering is a series of activities that aims to elicit, analyze, evaluate, and document the requirements of a system being developed. These activities do not stop after the product is deployed but continue as users use the product and provide feedback on it. No matter how decent the functionalities of a product are, if it cannot address the actual problems and/or opportunities of the stakeholders or users, the product cannot be considered useful. That being said, not all stakeholders are willing to participate in providing useful feedback to improve the product after deployment, for many reasons. Gamification is regarded as an opportunity to improve users' motivation to use a product by implementing game design elements in an existing software product, thus increasing user participation in providing useful feedback and in evolving the requirements of a software product. This research proposes a model to support engineers in motivating users to provide feedback using gamification, together with a Naïve Bayes classifier that classifies user feedback into the categories developers need in order to extract the requirements stated in the feedback, such as bug reports, feature requests, user experiences, etc. Keywords—requirements engineering, gamification, Naïve Bayes, user feedback
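A multinomial Naïve Bayes classifier of the kind mentioned above can be sketched in pure Python. This is a minimal sketch, not the study's implementation: the training examples and their category labels are invented for illustration.

```python
import math
from collections import Counter, defaultdict

# Minimal multinomial Naive Bayes for feedback classification
# (illustrative sketch with invented training data).

def train(samples):
    """samples: list of (text, label) pairs. Returns model parameters."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in samples:
        class_counts[label] += 1
        for w in text.lower().split():
            word_counts[label][w] += 1
            vocab.add(w)
    return class_counts, word_counts, vocab

def predict(model, text):
    class_counts, word_counts, vocab = model
    total = sum(class_counts.values())
    best_label, best_logp = None, float("-inf")
    for label in class_counts:
        # log prior + log likelihoods with Laplace (add-one) smoothing
        logp = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            logp += math.log((word_counts[label][w] + 1) / denom)
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

training = [
    ("app crashes when I open the camera", "bug report"),
    ("error on login screen every time", "bug report"),
    ("please add a dark mode option", "feature request"),
    ("would love offline support", "feature request"),
    ("smooth experience overall very happy", "user experience"),
    ("nice interface pleasant to use", "user experience"),
]
model = train(training)
print(predict(model, "crashes with an error on startup"))
```

Log-probabilities and add-one smoothing keep the classifier numerically stable and able to handle words unseen during training, which matters for noisy user feedback.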

