support software
Recently Published Documents

TOTAL DOCUMENTS: 526 (FIVE YEARS 101)
H-INDEX: 21 (FIVE YEARS 3)

2021 ◽  
Vol 2 (4) ◽  
pp. 312-318
Author(s):  
Stephanie J. Yaung ◽  
Adeline Pek

Given the increase in genomic testing in routine clinical use, there is a growing need for digital technology solutions to assist pathologists, oncologists, and researchers in translating variant calls into actionable knowledge to personalize patient management plans. In this article, we discuss the challenges facing molecular geneticists and medical oncologists in working with test results from next-generation sequencing for somatic oncology, and propose key considerations for implementing decision support software to aid the interpretation of clinically important variants. In addition, we review results from an example decision support software, NAVIFY Mutation Profiler. NAVIFY Mutation Profiler is a cloud-based software that provides curation, annotation, interpretation, and reporting of somatic variants identified by next-generation sequencing. The software reports a tiered classification based on consensus recommendations from AMP, ASCO, CAP, and ACMG. Studies with NAVIFY Mutation Profiler showed that the software provided timely updates, accurate curation, and interpretation of variant combinations, demonstrating that decision support tools can help advance the implementation of precision oncology.
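
The AMP/ASCO/CAP consensus recommendations mentioned above group somatic variants into four tiers by strength of clinical evidence. The following is a minimal sketch of that tiering idea only; the function, its inputs, and the evidence-level handling are assumptions for illustration and do not represent NAVIFY Mutation Profiler's actual curation logic.

```python
# Minimal sketch of AMP/ASCO/CAP-style tiering of a somatic variant.
# Inputs are simplified assumptions; real decision support software relies
# on curated knowledge bases rather than a lookup like this.

def classify_variant(evidence_level=None, known_benign=False):
    """Return an AMP/ASCO/CAP-style tier for a somatic variant.

    evidence_level -- highest available clinical evidence level ("A".."D"),
                      or None if no clinical evidence is known.
    known_benign   -- True if population data indicate the variant is benign.
    """
    if known_benign:
        return "Tier IV: benign or likely benign"
    if evidence_level in ("A", "B"):
        return "Tier I: strong clinical significance"
    if evidence_level in ("C", "D"):
        return "Tier II: potential clinical significance"
    return "Tier III: unknown clinical significance"

print(classify_variant("A"))    # strong evidence -> Tier I
print(classify_variant(None))   # no evidence -> Tier III
```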


2021 ◽  
Author(s):  
Leticia Carvalho Passos ◽  
Lucas Viana ◽  
Edson Oliveira ◽  
Tayana Conte

Author(s):  
Danielle S. Bitterman ◽  
Philip Selesnick ◽  
Jeremy Bredfeldt ◽  
Christopher L. Williams ◽  
Christian Guthier ◽  
...  

2021 ◽  
Vol 5 (OOPSLA) ◽  
pp. 1-31
Author(s):  
Alexandru Dura ◽  
Christoph Reichenbach ◽  
Emma Söderberg

Static checker frameworks support software developers by automatically discovering bugs that fit general-purpose bug patterns. These frameworks ship with hundreds of detectors for such patterns and allow developers to add custom detectors for their own projects. However, existing frameworks generally encode detectors in imperative specifications, with extensive details of not only what to detect but also how. These details complicate detector maintenance and evolution, and also interfere with the framework’s ability to change how detection is done, for instance, to make the detectors incremental. In this paper, we present JavaDL, a Datalog-based declarative specification language for bug pattern detection in Java code. JavaDL seamlessly supports both exhaustive and incremental evaluation from the same detector specification. This specification allows developers to describe local detector components via syntactic pattern matching, and nonlocal (e.g., interprocedural) reasoning via Datalog-style logical rules. We compare our approach against the well-established SpotBugs and Error Prone tools by re-implementing several of their detectors in JavaDL. We find that our implementations are substantially smaller and similarly effective at detecting bugs on the Defects4J benchmark suite, and run with competitive runtime performance. In our experiments, neither incremental nor exhaustive analysis can consistently outperform the other, which highlights the value of our ability to transparently switch execution modes. We argue that our approach showcases the potential of clear-box static checker frameworks that constrain the bug detector specification language to enable the framework to adapt and enhance the detectors.
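
JavaDL itself expresses detectors as Datalog rules over program facts; the short Python sketch below only illustrates that underlying idea, a detector written as a declarative join over extracted fact relations rather than an imperative AST traversal. The fact relation names, the example program facts, and the string-comparison bug pattern are invented for illustration and are not JavaDL syntax.

```python
# Toy illustration of Datalog-style, declarative bug detection (not JavaDL
# syntax). Program facts are relations (sets of tuples) assumed to be
# extracted from the AST by some frontend; a detector is a rule that joins
# those relations.

# Hypothetical fact relations for a small Java program:
#   binary_op(expr_id, operator, lhs_id, rhs_id, line)
binary_op = {
    ("e1", "==", "a", "b", 12),
    ("e2", "+",  "x", "y", 20),
}
#   has_type(expr_id, type_name)
has_type = {
    ("a", "java.lang.String"),
    ("b", "java.lang.String"),
    ("x", "int"),
    ("y", "int"),
}

def string_reference_comparison():
    """stringEqBug(Expr, Line) :-
         binary_op(Expr, "==", L, R, Line),
         has_type(L, "java.lang.String"),
         has_type(R, "java.lang.String")."""
    types = dict(has_type)
    return {
        (expr, line)
        for (expr, op, lhs, rhs, line) in binary_op
        if op == "=="
        and types.get(lhs) == "java.lang.String"
        and types.get(rhs) == "java.lang.String"
    }

print(string_reference_comparison())  # {('e1', 12)} -> warn: use equals()
```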


2021 ◽  
Vol 12 ◽  
Author(s):  
Mitchell A. Frankel ◽  
Mark J. Lehmkuhle ◽  
Mark C. Spitz ◽  
Blake J. Newman ◽  
Sindhu V. Richards ◽  
...  

Epitel has developed Epilog, a miniature, wireless, wearable electroencephalography (EEG) sensor. Four Epilog sensors are combined as part of Epitel's Remote EEG Monitoring platform (REMI) to create 10 channels of EEG for remote patient monitoring. REMI is designed to provide comprehensive spatial EEG recordings that can be administered by non-specialized medical personnel in any medical center. The purpose of this study was to determine how accurate epileptologists are at remotely reviewing Epilog sensor EEG in the 10-channel “REMI montage,” with and without seizure detection support software. Three board certified epileptologists reviewed the REMI montage from 20 subjects who wore four Epilog sensors for up to 5 days alongside traditional video-EEG in the EMU, 10 of whom experienced a total of 24 focal-onset electrographic seizures and 10 of whom experienced no seizures or epileptiform activity. Epileptologists randomly reviewed the same datasets with and without clinical decision support annotations from an automated seizure detection algorithm tuned to be highly sensitive. Blinded consensus review of unannotated Epilog EEG in the REMI montage detected people who were experiencing electrographic seizure activity with 90% sensitivity and 90% specificity. Consensus detection of individual focal onset seizures resulted in a mean sensitivity of 61%, precision of 80%, and false detection rate (FDR) of 0.002 false positives per hour (FP/h) of data. With algorithm seizure detection annotations, the consensus review mean sensitivity improved to 68% with a slight increase in FDR (0.005 FP/h). As seizure detection software, the automated algorithm detected people who were experiencing electrographic seizure activity with 100% sensitivity and 70% specificity, and detected individual focal onset seizures with a mean sensitivity of 90% and mean false alarm rate of 0.087 FP/h. This is the first study showing epileptologists' ability to blindly review EEG from four Epilog sensors in the REMI montage, and the results demonstrate the clinical potential to accurately identify patients experiencing electrographic seizures. Additionally, the automated algorithm shows promise as clinical decision support software to detect discrete electrographic seizures in individual records as accurately as FDA-cleared predicates.
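
The seizure-level figures reported above (sensitivity, precision, false detections per hour) are standard counts-based metrics. The sketch below shows how such numbers are derived from true-positive, false-positive, and false-negative counts over a recording period; the counts used are made-up placeholders, not the study's data.

```python
# Minimal sketch: seizure-detection metrics of the kind reported above.
# The counts below are illustrative placeholders, not the study's results.

def detection_metrics(true_pos, false_pos, false_neg, hours_recorded):
    sensitivity = true_pos / (true_pos + false_neg)   # fraction of seizures found
    precision = true_pos / (true_pos + false_pos)     # fraction of detections that are real
    fdr_per_hour = false_pos / hours_recorded         # false detections per hour (FP/h)
    return sensitivity, precision, fdr_per_hour

sens, prec, fdr = detection_metrics(true_pos=15, false_pos=2,
                                    false_neg=9, hours_recorded=1000)
print(f"sensitivity={sens:.2f}, precision={prec:.2f}, FDR={fdr:.3f} FP/h")
```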


2021 ◽  
Vol 2061 (1) ◽  
pp. 012133
Author(s):  
A M Saykin ◽  
S E Buznikov

Abstract The relevance of developing efficient motion control systems for highly automated vehicles that can successfully compete with foreign systems of similar purpose stems from the importance of creating competitive high-tech products in a modern market economy. The research objective was to provide a scientific justification of the design principles for motion control systems of highly automated vehicles that enable a directed search among solution options for the multi-criterion optimization problem in the combined software and hardware space. The research involved methods of system analysis and modern control theory. The result is a set of design principles for motion control systems of highly automated vehicles that minimize the hardware required while preserving observability of all vehicle state coordinates significant for safe control, together with their dynamic boundaries, and ensuring controllability through the traction, braking, and steering channels. The conceptual core of such integrated intelligent control systems is the mathematical and software support for indirect measurement of motion parameters and for control of traction, braking, and steering that adapts to changes in the vehicle state and environment.
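
The abstract emphasizes keeping the vehicle state observable with minimal sensing hardware and controllable through the actuation channels. As a hedged illustration of how such properties are checked in control theory, and not of the authors' actual models, the sketch below computes observability and controllability matrix ranks for a toy longitudinal vehicle model using numpy.

```python
# Sketch: rank tests for observability and controllability of a toy
# longitudinal vehicle model (double integrator). The model is purely
# illustrative and not taken from the paper.
import numpy as np

dt = 0.01
A = np.array([[1.0, dt],       # state: [position, velocity]
              [0.0, 1.0]])
B = np.array([[0.5 * dt**2],   # input: longitudinal acceleration
              [dt]])
C = np.array([[1.0, 0.0]])     # single sensor: position only

def ctrb(A, B):
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def obsv(A, C):
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# Full rank (== number of states) means velocity can be reconstructed by
# indirect measurement and the state can be driven through the input channel.
print("controllable:", np.linalg.matrix_rank(ctrb(A, B)) == A.shape[0])
print("observable:  ", np.linalg.matrix_rank(obsv(A, C)) == A.shape[0])
```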


2021 ◽  
Author(s):  
Emanuel Dantas ◽  
Ademar Sousa Neto ◽  
Mirko Perkusich ◽  
Hyggo Almeida ◽  
Angelo Perkusich

Risk management is essential in software project management. It includes activities such as identifying, measuring, and monitoring risks. The literature presents different approaches to support software risk management. In particular, researchers have frequently used Bayesian networks (BNs) because they can be learned from data or elicited from domain experts. Even though the literature presents many BNs for software risk management, none focus on technological risk factors. Given this, this paper presents a BN for managing risks of software projects and the results of a static validation performed through a focus group with eight practitioners. The practitioners agreed that our proposed approach to managing technological risks of software projects with a BN is valuable and easy to use. Given these successful results, we conclude that the proposed solution is promising.
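
The paper's network structure and probabilities are not given here. The sketch below only illustrates, with invented nodes, probabilities, and weights, how a small Bayesian network for a technological risk factor could be encoded and queried in Python with the pgmpy library (assumed available; older pgmpy versions name the model class BayesianModel).

```python
# Illustrative sketch only: a tiny Bayesian network for one technological
# risk, with invented structure and probabilities (not the paper's model).
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# TechMaturity, TeamExperience -> TechnologicalRisk (all binary: 0=low, 1=high)
model = BayesianNetwork([("TechMaturity", "TechnologicalRisk"),
                         ("TeamExperience", "TechnologicalRisk")])

cpd_mat = TabularCPD("TechMaturity", 2, [[0.6], [0.4]])
cpd_exp = TabularCPD("TeamExperience", 2, [[0.5], [0.5]])
cpd_risk = TabularCPD(
    "TechnologicalRisk", 2,
    # columns: (maturity, experience) = (0,0), (0,1), (1,0), (1,1)
    [[0.2, 0.5, 0.6, 0.9],   # P(risk = low)
     [0.8, 0.5, 0.4, 0.1]],  # P(risk = high)
    evidence=["TechMaturity", "TeamExperience"],
    evidence_card=[2, 2],
)
model.add_cpds(cpd_mat, cpd_exp, cpd_risk)
assert model.check_model()

# Query: probability of high technological risk given low team experience.
result = VariableElimination(model).query(
    ["TechnologicalRisk"], evidence={"TeamExperience": 0}
)
print(result)
```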


2021 ◽  
Author(s):  
Yury Alencar Lima ◽  
Elder de Macedo Rodrigues ◽  
Fabio Paulo Basso ◽  
Rafael A. P. Oliveira

Software testing automation is one of the most challenging activities in Software Engineering scenarios. Model-Based Testing (MBT) is a feasible strategy to reduce the effort of automating testing activities. Through a model that specifies the behavior of the Software Under Test (SUT), MBT approaches generate test cases and run them. However, some domains, such as web applications, require extra effort to apply MBT approaches. Given this, in this study we propose and validate Teasy, a Domain-Specific Language (DSL) that makes MBT feasible for web applications. Through a proof of concept on testing a real-world web application, we found that Teasy has the potential to evolve to effectively support software development environments. Using a real-world application and projects with manually seeded faults, Teasy testing scenarios detected 78.57% of the functional inconsistencies.
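
Teasy's concrete DSL syntax is not reproduced here; the sketch below only illustrates the MBT idea the abstract describes, deriving test cases automatically from a behavioral model of the SUT. The toy login flow, state names, and action labels are assumptions made for the example.

```python
# Sketch of model-based test-case generation: a behavioral model of the SUT
# (a toy web login flow, invented for illustration) is traversed to derive
# test cases as sequences of user actions. This is not the Teasy DSL.

MODEL = {
    "start":      [("open_login_page", "login_page")],
    "login_page": [("submit_valid_credentials", "dashboard"),
                   ("submit_invalid_credentials", "error_page")],
    "error_page": [("retry", "login_page")],
    "dashboard":  [],  # terminal state for this model
}

def generate_test_cases(model, start, max_depth=4):
    """Enumerate action sequences (test cases) up to max_depth transitions."""
    cases = []
    stack = [(start, [])]
    while stack:
        state, path = stack.pop()
        if not model[state] or len(path) == max_depth:
            cases.append(path)            # a complete test case
            continue
        for action, next_state in model[state]:
            stack.append((next_state, path + [action]))
    return cases

for case in generate_test_cases(MODEL, "start"):
    print(" -> ".join(case))
```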


Information ◽  
2021 ◽  
Vol 12 (10) ◽  
pp. 396
Author(s):  
Jhemeson Silva Mota ◽  
Heloise Acco Tives ◽  
Edna Dias Canedo

Despite efforts to define productivity, there is no consensus in the software industry on what the term means; rather than a single metric or factor, productivity is described by a set of aspects. Our objective is to develop a tool that supports the productivity measurement of software development teams according to the factors found in the literature. We divided these factors into four groups: People, Product, Organization, and Open Source Software Projects. We developed a web system, called Productive, containing the productivity factors identified in this work to support software development teams in measuring their productivity. After developing the tool, we monitored its use over eight weeks with two small software development teams. The results show that software development companies can use the system to support monitoring team productivity. They also point to an improvement in productivity while using the system, and a survey of users shows a positive perception of the results obtained. In future work, we will monitor the use of the tool and investigate users' perceptions in other project contexts.
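
The abstract does not publish a scoring formula. As a loosely hedged illustration of how factor ratings from the four groups (People, Product, Organization, Open Source Software Projects) could be combined into a single indicator, the sketch below uses a simple weighted average; the weights, factor names, and ratings are invented and are not the Productive tool's own aggregation.

```python
# Hypothetical sketch: combining productivity-factor ratings (1-5 scale)
# from the four factor groups into one weighted score. Weights, factor
# names, and ratings are invented for illustration.

GROUP_WEIGHTS = {"People": 0.4, "Product": 0.25,
                 "Organization": 0.25, "Open Source Software Projects": 0.1}

def productivity_score(ratings_by_group):
    """ratings_by_group: {group: {factor: rating in 1..5}} -> score in 1..5."""
    score = 0.0
    for group, weight in GROUP_WEIGHTS.items():
        ratings = ratings_by_group.get(group, {})
        if ratings:
            score += weight * (sum(ratings.values()) / len(ratings))
    return round(score, 2)

example = {
    "People": {"motivation": 4, "experience": 3},
    "Product": {"code complexity": 2},
    "Organization": {"communication": 5},
    "Open Source Software Projects": {"community activity": 3},
}
print(productivity_score(example))  # 3.45 with the weights above
```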


2021 ◽  
Vol 26 (6) ◽  
Author(s):  
Camila Costa Silva ◽  
Matthias Galster ◽  
Fabian Gilson

Abstract Topic modeling using models such as Latent Dirichlet Allocation (LDA) is a text mining technique to extract human-readable semantic “topics” (i.e., word clusters) from a corpus of textual documents. In software engineering, topic modeling has been used to analyze textual data in empirical studies (e.g., to find out what developers talk about online), but also to build new techniques to support software engineering tasks (e.g., to support source code comprehension). Topic modeling needs to be applied carefully (e.g., depending on the type of textual data analyzed and modeling parameters). Our study aims at describing how topic modeling has been applied in software engineering research with a focus on four aspects: (1) which topic models and modeling techniques have been applied, (2) which textual inputs have been used for topic modeling, (3) how textual data was “prepared” (i.e., pre-processed) for topic modeling, and (4) how generated topics (i.e., word clusters) were named to give them a human-understandable meaning. We analyzed topic modeling as applied in 111 papers from ten highly-ranked software engineering venues (five journals and five conferences) published between 2009 and 2020. We found that (1) LDA and LDA-based techniques are the most frequent topic modeling techniques, (2) developer communication and bug reports have been modelled most, (3) data pre-processing and modeling parameters vary quite a bit and are often vaguely reported, and (4) manual topic naming (such as deducting names based on frequent words in a topic) is common.
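
As an illustration of the kind of pipeline the survey analyzes, the sketch below runs LDA on a few toy "developer communication" snippets with the gensim library (assumed available here). The documents and stop-word list are invented, and the pre-processing is deliberately minimal, whereas the survey finds that real studies vary considerably in how they prepare text and set parameters.

```python
# Minimal LDA sketch with gensim (assumed installed): toy developer-
# communication snippets, light pre-processing, and topic extraction.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

docs = [
    "null pointer exception when parsing the config file",
    "crash on startup caused by missing config value",
    "improve build speed by caching maven dependencies",
    "ci pipeline slow, gradle build takes twenty minutes",
]

STOPWORDS = {"the", "by", "on", "when", "a", "an", "of", "to"}

# Pre-processing: lowercase, tokenize on whitespace, drop stop words.
tokens = [[w for w in doc.lower().split() if w not in STOPWORDS] for doc in docs]

dictionary = Dictionary(tokens)                   # word <-> id mapping
corpus = [dictionary.doc2bow(t) for t in tokens]  # bag-of-words vectors

lda = LdaModel(corpus=corpus, id2word=dictionary,
               num_topics=2, passes=20, random_state=42)

# Each topic is a word cluster; naming it (e.g. "crashes", "build speed")
# is the manual step discussed in the survey.
for topic_id, words in lda.print_topics(num_words=5):
    print(topic_id, words)
```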

