Bug Localization in Model-Based Systems in the Wild

2022 ◽  
Vol 31 (1) ◽  
pp. 1-32
Author(s):  
Lorena Arcega ◽  
Jaime Font Arcega ◽  
Øystein Haugen ◽  
Carlos Cetina

The companies that have adopted the Model-Driven Engineering (MDE) paradigm have the advantage of working at a high level of abstraction. Nevertheless, they suffer from a lack of tools for performing bug localization at the model level. In addition, in an MDE context, a bug can be related to different MDE artefacts, such as design-time models, model transformations, or run-time models. Starting bug localization in the wrong place or with the wrong tool can lead to unsatisfactory results. We evaluate how to apply existing model-based approaches in order to mitigate the effect of starting the localization in the wrong place. We also take into account that software engineers can refine the results at different stages. In our evaluation, we compare different combinations of bug localization approaches and human refinement. The combination of our approaches plus manual refinement obtains the best results. We performed a statistical analysis to provide evidence of the significance of the results. The conclusions from this evaluation are that humans have to be involved at the right time in the process (otherwise results can even get worse) and that artefact independence can be achieved without worsening the results.
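The abstract does not detail the ranking mechanism, but model-level bug localization is commonly framed as retrieving and ranking model fragments by their textual similarity to the bug report, which an engineer then refines. A minimal sketch of that idea, assuming a hypothetical `ModelFragment` type and a plain term-overlap score rather than the authors' actual technique:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class ModelFragment:
    """Hypothetical container for a design-time model fragment."""
    fragment_id: str
    text: str  # names, attributes, and comments of the contained elements

def tokenize(text: str) -> Counter:
    return Counter(text.lower().split())

def rank_fragments(bug_report: str, fragments: list[ModelFragment]) -> list[tuple[str, float]]:
    """Rank model fragments by term overlap with the bug report (IR-style baseline)."""
    query = tokenize(bug_report)
    scores = []
    for frag in fragments:
        doc = tokenize(frag.text)
        overlap = sum(min(query[t], doc[t]) for t in query)
        scores.append((frag.fragment_id, overlap / (sum(doc.values()) + 1)))
    return sorted(scores, key=lambda s: s[1], reverse=True)

# A software engineer would then manually refine the top of this ranked list.
```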

Author(s):  
Pablo Nicolás Díaz Bilotto ◽  
Liliana Favre

Software developers face several challenges in deploying mobile applications. One of them is the high cost and technical complexity of targeting development to a wide spectrum of platforms. The chapter proposes to combine techniques based on MDA (Model Driven Architecture) with the HaXe language. The outstanding ideas behind MDA are separating the specification of the system functionality from its implementation on specific platforms, managing software evolution, increasing the degree of automation of model transformations, and achieving interoperability with multiple platforms. On the other hand, HaXe is a modern high-level programming language that allows us to generate mobile applications targeting all major mobile platforms. The main contributions of this chapter are the definition of a HaXe metamodel, the specification of a model-to-model transformation between Java and HaXe, and the definition of an MDA migration process from Java to mobile platforms.
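The chapter's metamodels and transformation rules are not reproduced in the abstract; the following is a minimal sketch of the model-to-model mapping idea only, using simplified, hypothetical class-model types for Java and HaXe rather than the chapter's actual metamodels:

```python
from dataclasses import dataclass, field

# Simplified, hypothetical source (Java) and target (HaXe) metamodel elements.
@dataclass
class JavaClass:
    name: str
    fields: dict[str, str] = field(default_factory=dict)   # field name -> Java type

@dataclass
class HaxeClass:
    name: str
    vars: dict[str, str] = field(default_factory=dict)      # var name -> HaXe type

# Illustrative type mapping; a real transformation covers the full type system.
TYPE_MAP = {"int": "Int", "boolean": "Bool", "String": "String", "double": "Float"}

def java_to_haxe(source: JavaClass) -> HaxeClass:
    """Model-to-model rule: map each Java class to a HaXe class, translating field types."""
    return HaxeClass(
        name=source.name,
        vars={n: TYPE_MAP.get(t, "Dynamic") for n, t in source.fields.items()},
    )

print(java_to_haxe(JavaClass("Account", {"id": "int", "active": "boolean"})))
```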


Author(s):  
Jeff Gray ◽  
Sandeep Neema ◽  
Jing Zhang ◽  
Yuehua Lin ◽  
Ted Bapty ◽  
...  

The development of distributed real-time and embedded (DRE) systems is often challenging due to conflicting quality-of-service (QoS) constraints that must be explored as trade-offs among a series of alternative design decisions. The ability to model a set of possible design alternatives—and to analyze and simulate the execution of the representative model—helps derive the correct set of QoS parameters needed to satisfy DRE system requirements. QoS adaptation is accomplished via rules that specify how to modify application or middleware behavior in response to changes in resource availability. This chapter presents a model-driven approach for generating QoS adaptation rules in DRE systems. This approach creates high-level graphical models representing QoS adaptation policies. The models are constructed using a domain-specific modeling language—the adaptive quality modeling language (AQML)—which assists in separating common concerns of a DRE system via different modeling views. The chapter motivates the need for model transformations to address crosscutting and scalability concerns within models. In addition, a case study is presented based on bandwidth adaptation in video streaming of unmanned aerial vehicles.
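AQML models are ultimately translated into executable adaptation logic. As an illustration only (not the AQML notation itself), a bandwidth-adaptation rule of the kind described for the UAV video-streaming case study might look like the following sketch, where all thresholds and parameter names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class StreamConfig:
    frame_rate: int      # frames per second
    resolution: tuple    # (width, height) in pixels

def adapt_video_stream(bandwidth_kbps: float, cfg: StreamConfig) -> StreamConfig:
    """Toy QoS adaptation rule: degrade the video stream as available bandwidth drops."""
    if bandwidth_kbps < 256:          # hypothetical low-bandwidth threshold
        return StreamConfig(frame_rate=5, resolution=(320, 240))
    if bandwidth_kbps < 1024:         # hypothetical medium-bandwidth threshold
        return StreamConfig(frame_rate=15, resolution=(640, 480))
    return cfg                        # enough bandwidth: keep the requested configuration

print(adapt_video_stream(500, StreamConfig(frame_rate=30, resolution=(1280, 720))))
```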


Author(s):  
María-Cruz Valiente ◽  
Cristina Vicente-Chicote ◽  
Daniel Rodríguez

Currently, few projects applying a Model-Driven Engineering (MDE) approach start from high-level requirements models defined exclusively in terms of domain knowledge and business logic. Ontology Engineering (OE) aims to formalize and make explicit the knowledge related to a particular domain. In this vein, this paper presents a modeling approach, formalized in ontological terms, for defining high-level requirements models of software systems that provide support for the implementation of Information Technology Service Management Systems (ITSMSs). This approach allows for: (1) formalizing the knowledge associated with the ITSM processes contained in an ITSMS; (2) modeling the semantics of the activities associated with these processes in terms of workflows; (3) automatically generating the high-level requirements models of the workflow-based software systems needed to support (part of) the ITSM processes; and (4) from the latter, obtaining lower-level models (and eventually code) by means of automated model transformations. A real case study describing the use of this proposal to model an Incident Management System is also included to demonstrate the feasibility and the benefits of the proposed approach.
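The paper's ontology and transformation chain are not shown in the abstract; as a purely illustrative sketch of the generation step, an ITSM process formalized as a workflow could be turned into high-level functional requirements along these lines (all class names, fields, and requirement templates are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    role: str            # actor responsible for the activity

@dataclass
class Workflow:
    process: str         # e.g. "Incident Management"
    activities: list

def generate_requirements(wf: Workflow) -> list:
    """Derive one high-level functional requirement per workflow activity."""
    return [
        f"The system shall allow the {a.role} to perform '{a.name}' "
        f"as part of the {wf.process} process."
        for a in wf.activities
    ]

incident = Workflow("Incident Management", [
    Activity("Record incident", "service desk operator"),
    Activity("Classify and prioritize incident", "service desk operator"),
    Activity("Resolve incident", "support technician"),
])
for req in generate_requirements(incident):
    print(req)
```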


2020 ◽  
Vol 17 (4A) ◽  
pp. 579-587
Author(s):  
Gullelala Jadoon ◽  
Muhammad Shafi ◽  
Sadaqat Jan

The development paradigm in the software engineering industry has shifted from a programming-oriented approach to model-oriented development. At present, model-based development is becoming an emerging method for enterprises to construct software systems and services proficiently. At Capability Maturity Model Integration (CMMI) Level 2, i.e., Managed, we need to sustain a bi-directional trace of the transformed models for the administration of user requirements and demands. This goal is achieved by applying the specific practices suggested by the CMMI Level 2 process area of Requirements Management (RM). Maintaining this trace is very challenging for software developers and testers, particularly during the evaluation and upgrading phases of development. In our previous research work, we proposed a traceability framework for model-based development of applications for software enterprises. This work extends that research by proposing meta-model transformations aligned with the Software Development Life Cycle (SDLC). These meta-models are capable of maintaining trace information through relations. The proposed technique is also verified using a generalized illustration of an application. This transformation practice gives software designers a foundation for maintaining traceability links in model-driven development.
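The meta-models themselves are not reproduced in the abstract. A minimal sketch of the underlying idea, keeping bi-directional trace links between source and target model elements across a transformation, could look like the following; the types, identifiers, and API are assumptions for illustration, not the authors' meta-model:

```python
from collections import defaultdict

class TraceStore:
    """Toy bi-directional trace store linking source model elements to the
    target elements a transformation produced from them."""

    def __init__(self):
        self.forward = defaultdict(set)   # source element -> target elements
        self.backward = defaultdict(set)  # target element -> source elements

    def record(self, source_id: str, target_id: str) -> None:
        self.forward[source_id].add(target_id)
        self.backward[target_id].add(source_id)

    def impacted_targets(self, source_id: str) -> set:
        """Which generated elements must be revisited if a requirement changes?"""
        return self.forward[source_id]

    def origin_of(self, target_id: str) -> set:
        """Which requirements does a design or test element trace back to?"""
        return self.backward[target_id]

trace = TraceStore()
trace.record("REQ-12", "DesignClass::Order")
trace.record("REQ-12", "TestCase::TC-7")
print(trace.impacted_targets("REQ-12"))
print(trace.origin_of("TestCase::TC-7"))
```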


2019 ◽  
Vol 65 (1) ◽  
pp. 27-41
Author(s):  
Yelena Artamonova

Lung cancer is the leading cause of mortality from malignant tumors all over the world. Since most patients already have stage III-IV disease at the time of diagnosis, the search for new effective treatment strategies for advanced NSCLC is one of the most important problems of modern oncology. The results of the studies of the anti-PD-1 monoclonal antibody pembrolizumab were a real breakthrough in the treatment of NSCLC. In the KEYNOTE-001 study, the expression of PD-L1 on tumor cells was validated as a predictive biomarker of the drug's efficacy. Pembrolizumab demonstrated the possibility of achieving long-term objective responses, and in the subgroup of pre-treated patients the 4-year OS across all histological types was 24.8% with PD-L1 expression > 50% and 15.6% in the PD-L1 > 1% group. In the randomized phase 2/3 KEYNOTE-010 study in the 2nd-line treatment of NSCLC with PD-L1 expression > 1%, pembrolizumab significantly increased life expectancy compared to docetaxel and confirmed the possibility of long-term objective responses, even after cessation of treatment. The focus of research then shifted to the 1st line of treatment. About 30% of patients with NSCLC have a high level of PD-L1 expression on tumor cells and demonstrate the most impressive response to pembrolizumab therapy. The randomized phase 3 KEYNOTE-024 study compared the effectiveness of pembrolizumab monotherapy with a standard platinum combination in patients with advanced NSCLC with a high level of PD-L1 expression and without EGFR mutations or ALK translocation. Compared with the platinum doublet, pembrolizumab significantly improved all estimated parameters, including median progression-free survival (mPFS 10.3 months versus 6 months; HR = 0.50; 95% CI 0.37-0.68; p < 0.001), the objective response rate (ORR 44.8% versus 27.8%), and duration of response (the median was not reached in the pembrolizumab arm versus 6.3 months in the chemotherapy (CT) group). Despite the approved crossover, the use of pembrolizumab in the 1st line of treatment more than doubled the life expectancy of NSCLC patients with high PD-L1 expression as compared to CT: median overall survival (OS) was 30.0 months versus 14.2 months (HR = 0.63; p = 0.002), 1-year OS 70.3% versus 54.8%, and 2-year OS 51.5% versus 34.5%. The remaining population to study was untreated patients with any level of PD-L1 expression. The randomized phase 3 KEYNOTE-189 study evaluated the effectiveness of adding pembrolizumab to the platinum combination in the 1st-line treatment of non-squamous NSCLC without EGFR and ALK mutations and with any PD-L1 expression. The addition of pembrolizumab to standard 1st-line CT significantly improved all estimated efficacy indicators, including OS, PFS, and ORR. After a median follow-up of 10.5 months, the median OS in the pembrolizumab combination group was not reached, while in the CT group it was 11.3 months. The estimated 12-month survival was 69.2% and 49.4%, respectively (HR = 0.49; 95% CI 0.38-0.64; p < 0.001). The median PFS was 8.8 months versus 4.9 months, with 34.1% and 17.3% of patients, respectively, alive without progression at 1 year (HR = 0.52; p < 0.001). The ORR in the pembrolizumab group reached 47.6% versus 18.9% in the CT group, and the tumor regressions were much longer. Finally, the randomized phase 3 KEYNOTE-407 study evaluated the effectiveness of adding pembrolizumab to 1st-line CT of NSCLC with squamous histology and any PD-L1 expression.
As the first analysis showed, the addition of pembrolizumab significantly increased OS of patients with squamous NSCLC: median OS was 15.9 months versus 11.3 months in the pembrolizumab + CT and placebo + CT groups, respectively (HR = 0.64; 95% CI 0.49-0.95; p = 0.0006); median PFS was 6.4 months versus 4.8 months (HR = 0.56; 95% CI 0.45-0.70; p < 0.0001); ORR was 57.9% versus 38.4%; and median response duration was 7.7 months versus 4.8 months. Thus, convincing advantages of using pembrolizumab in 1st-line therapy were demonstrated in three randomized phase 3 studies: as monotherapy for NSCLC of any histological subtype with high PD-L1 expression, and in combination with CT in squamous and non-squamous histologies regardless of the level of PD-L1 expression.


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2085
Author(s):  
Xue-Bo Jin ◽  
Ruben Jonhson Robert Jeremiah ◽  
Ting-Li Su ◽  
Yu-Ting Bai ◽  
Jian-Lei Kong

State estimation is widely used in various automated systems, including IoT systems, unmanned systems, robots, etc. In traditional state estimation, measurement data are instantaneous and processed in real time. As modern systems develop, sensors can obtain and store more and more signals. Therefore, how to use this measurement big data to improve the performance of state estimation has become a hot research issue in this field. This paper reviews the development of state estimation and future development trends. First, we review the model-based state estimation methods, including the Kalman filter and its variants, such as the extended Kalman filter (EKF), unscented Kalman filter (UKF), and cubature Kalman filter (CKF). Particle filters and Gaussian mixture filters that can handle mixed Gaussian noise are also discussed. These methods place high demands on the model, while accurate system models are not easy to obtain in practice. The emergence of robust filters, the interacting multiple model (IMM), and adaptive filters is also mentioned here. Second, the current research status of data-driven state estimation methods based on network learning is introduced. Finally, the main research results obtained in recent years for hybrid filters, which combine model-based and data-driven methods, are summarized and discussed. This paper is based on state estimation research results and provides a more detailed overview of model-driven, data-driven, and hybrid-driven approaches. The main algorithm of each method is provided so that beginners can have a clearer understanding. Additionally, it discusses the future development trends for researchers in state estimation.
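As a quick reference for the model-based family the review starts from, a minimal linear Kalman filter predict/update cycle (the base case that EKF, UKF, and CKF generalize to nonlinear models) can be sketched as follows; the model matrices and noise levels here are generic placeholders, not taken from the paper:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the linear Kalman filter.

    x, P : prior state estimate and covariance
    z    : new measurement
    F, H : state-transition and observation matrices
    Q, R : process and measurement noise covariances
    """
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 1D constant-velocity example with placeholder noise covariances.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.5]])
x, P = np.zeros(2), np.eye(2)
for z in [1.1, 2.0, 2.9, 4.2]:
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
print(x)  # estimated position and velocity after the measurements
```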


Genes ◽  
2021 ◽  
Vol 12 (5) ◽  
pp. 618
Author(s):  
Yue Jin ◽  
Shihao Li ◽  
Yang Yu ◽  
Chengsong Zhang ◽  
Xiaojun Zhang ◽  
...  

A mutant of the ridgetail white prawn, which exhibits a rare orange-red body color and a higher concentration of free astaxanthin (ASTX) than the wild-type prawn, was obtained in our lab. To understand the mechanism underlying this high level of free astaxanthin, transcriptome analysis was performed to identify the differentially expressed genes (DEGs) between the mutant and wild-type prawns. A total of 78,224 unigenes were obtained, and 1863 were identified as DEGs, of which 902 showed higher expression levels and 961 showed lower expression levels in the mutant in comparison with the wild-type prawns. Based on Gene Ontology analysis and Kyoto Encyclopedia of Genes and Genomes analysis, as well as further investigation of annotated DEGs, we found that the biological processes related to astaxanthin binding, transport, and metabolism presented significant differences between the mutant and the wild-type prawns. Some genes related to these processes, including crustacyanin, apolipoprotein D (ApoD), cathepsin, and cuticle proteins, were identified as DEGs between the two types of prawns. These data may provide important information for understanding the molecular mechanism behind the high level of free astaxanthin in the prawn.
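The abstract does not specify the statistical pipeline used to call DEGs; a common approach for count-based transcriptome data is to threshold fold change and adjusted p-value, as in this illustrative sketch (the cutoffs, gene labels, and values below are assumptions, not the authors' settings or results):

```python
def classify_deg(log2_fold_change: float, adjusted_p: float,
                 lfc_cutoff: float = 1.0, p_cutoff: float = 0.05) -> str:
    """Label a unigene as up-regulated, down-regulated, or not significant
    in the mutant relative to the wild type (thresholds are illustrative)."""
    if adjusted_p < p_cutoff and log2_fold_change >= lfc_cutoff:
        return "up"       # higher expression in the mutant
    if adjusted_p < p_cutoff and log2_fold_change <= -lfc_cutoff:
        return "down"     # lower expression in the mutant
    return "ns"

# Hypothetical unigenes: (id, log2 fold change mutant vs wild type, adjusted p-value)
unigenes = [("crustacyanin-like", 2.3, 1e-6), ("ApoD-like", -1.8, 3e-4), ("actin", 0.1, 0.7)]
for uid, lfc, padj in unigenes:
    print(uid, classify_deg(lfc, padj))
```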


Author(s):  
Aida Mekhoukhe ◽  
Nacer Mohellebi ◽  
Tayeb Mohellebi ◽  
Leila Deflaoui-Abdelfettah ◽  
Sonia Medouni-Adrar ◽  
...  

OBJECTIVE: The present work proposed to extract Locust Bean Gum (LBG) from Algerian carob fruits and to evaluate its physicochemical and rheological properties (solubility). It also aimed to develop different formulations of strawberry jams with a mixture of LBG and pectin in order to obtain a product with high sensory acceptance. METHODS: The physicochemical characteristics of LBG were assessed. The impact of temperature on solubility was also studied. The physical properties, sensory profile, and acceptance of five jams were evaluated. RESULTS: Composition results revealed that LBG presented a high level of carbohydrate but low concentrations of fat and ash. The LBG was partially cold-water-soluble (∼62% at 25 °C) and needed heating to reach a higher solubility value (∼89% at 80 °C). Overall, sensory acceptance decreased for jam J3, which was formulated with 100% pectin, and for the commercial one (J5). The external preference map showed that most consumers were located on the right side of the map, providing evidence that the most appreciated samples were J4 and J2 (rate of 80–100%). CONCLUSION: In this investigation, LBG was used successfully in the strawberry jam formulation.

