software models
Recently Published Documents


TOTAL DOCUMENTS

309
(FIVE YEARS 86)

H-INDEX

15
(FIVE YEARS 2)

2022 ◽  
Vol 14 (1) ◽  
pp. 551
Author(s):  
Jakub Swacha

Information and Communication Technologies (ICTs) play a double role in the pursuit of the sustainable development goals, as both an enabler of green solutions and a cause of excessive consumption. While the primary focus of sustainability-related research is on the hardware aspect of ICT, its software aspect also deserves attention. In order for the notion of green and sustainable software to become widespread among practitioners, models are needed, both to serve as a reference on how to develop sustainable software and to check whether a given piece of software, or its development process, is sustainable. In this paper, we present the results of a scoping review of the literature on sustainable software models, based on 41 works extracted from an initial set of 178 query results from four bibliographic data providers. The relevant literature is mapped using five categories (model scope, purpose, covered sustainability aspects, verification or validation, and the economic category of the country of research), allowing us to identify recent trends and research gaps that can be addressed in future work.


10.6036/10243 ◽  
2022 ◽  
Vol 97 (1) ◽  
pp. 18-22
Author(s):  
MIREN ILLARRAMENDI REZABAL ◽  
ASIER IRIARTE ◽  
AITOR ARRIETA AGUERRI, ◽  
GOIURIA SAGARDUI MENDIETA ◽  
FELIX LARRINAGA BARRENECHEA

The digital industry requires increasingly complex and reliable software systems that must control processes and make critical decisions at runtime. As a consequence, the verification and validation of these systems has become a major research challenge. At design and development time, model testing techniques are used, while run-time verification aims at checking that a running system satisfies a given property; the latter technique complements the former. The solution presented in this paper targets embedded systems whose software components are designed as state machines defined in the Unified Modelling Language (UML). The CRESCO (C++ REflective State-Machines based observable software COmponents) platform generates software components that expose internal information at runtime, and the verifier uses this information to check system-level reliability/safety contracts. The verifier detects when a system contract is violated and initiates a safeState process to prevent dangerous scenarios. These contracts are defined over the internal information of the software components that make up the system. Thus, as demonstrated in the reported experiment, the robustness of the system is increased. All software components (controllers), as well as the verifier, have been deployed as services (producers/consumers) of the Arrowhead IoT platform: the controllers are deployed on local Arrowhead platforms (Edge), and the verifier (Safety Manager) is deployed on an Arrowhead platform (Cloud) that consumes the Edge controllers and ensures the proper functioning of the plant controllers. Keywords: run-time monitoring, robustness, software components, contracts, software models, state machines
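The run-time contract checking described above can be illustrated with a minimal sketch. This is not the CRESCO platform's code; all names (Contract, Monitor, the tank/valve states) are hypothetical, chosen only to show the pattern of a verifier that watches component-reported state and triggers a safe-state reaction on a contract violation.

```python
class Contract:
    """A system-level safety contract over observable component states."""
    def __init__(self, name, predicate):
        self.name = name
        self.predicate = predicate  # callable: dict of states -> bool

    def holds(self, states):
        return self.predicate(states)


class Monitor:
    """Checks contracts against component-reported internal state at runtime."""
    def __init__(self, contracts):
        self.contracts = contracts
        self.safe_state = False

    def check(self, states):
        for c in self.contracts:
            if not c.holds(states):
                # Contract violated: drive the system to a safe state
                # and report which contract failed.
                self.safe_state = True
                return c.name
        return None


# Hypothetical plant rule: the valve must not be OPEN while the tank is FULL.
contract = Contract(
    "valve-closed-when-full",
    lambda s: not (s["tank"] == "FULL" and s["valve"] == "OPEN"))
monitor = Monitor([contract])

assert monitor.check({"tank": "FILLING", "valve": "OPEN"}) is None
violated = monitor.check({"tank": "FULL", "valve": "OPEN"})
# violated now names the failed contract, and monitor.safe_state is True
```

In the paper's architecture, the `check` step would run in the Cloud-side Safety Manager, consuming state published by the Edge controllers.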


Author(s):  
Sandeep Haritwal

Abstract: In India, every industry plays its own part in moving the country towards its development goals. The construction industry is especially significant, as the number of high-rise structures has been increasing steadily. Besides being strong, each structural element should also be economical. Using an optimum-size approach to reduce the axial forces in the columns of a multi-storeyed building in a seismic zone is a new idea: it reduces the sizes of beams and columns at the different levels of the building. In turn, reducing the self-weight minimizes the overall structural weight and yields an economical structure. In this project, a G+13-storey structure with a plinth area of 625 m2, assumed to be situated in seismic Zone III, is analysed for six different cases, named AFR Case A to AFR Case F, and the cases are compared on each parameter. Comparing the six maximum axial-force-reduction cases, the project concludes that AFR Case C is the most efficient and ultimately reduces the overall cost of the project. Keywords: Axial forces, Columns, Strength, Durability, Software Models, High-Rise Structures


2021 ◽  
Author(s):  
Afef Salhi ◽  
Fahmi Ghozzi ◽  
Ahmed Fakhfakh

Co-design of embedded systems is a very important step in digital vehicles and airplanes. Multicore and multiprocessor systems-on-chip (MPSoC) have started a new computing era and are becoming increasingly used because they give designers many more opportunities to meet specific performance targets. Designing embedded systems includes two main phases: (i) HW/SW partitioning, performed from high-level functional and architecture models (C/C++ in Eclipse, or Python for machine learning and deep learning) with virtual and real prototypes; and (ii) software design, performed with significantly more detailed models, using task scheduling and partitioning algorithms based on the Directed Acyclic Graph (DAG) and on GGEN (Generation Graph Estimation Nodes), an automatic DAG-construction algorithm. Partitioning decisions are made according to performance assumptions that should be validated on the more refined software models for the ME block and the GGEN algorithm. In this paper, we focus on optimizing execution time and improving video quality through task scheduling and partitioning in a video codec. We show how the test video sequences can be modelled by video size in height and width (three models of task scheduling on four processors). The DAG and GGEN models are partitioned on different platforms in OVP (partitioning, SW design), which lets us assess energy consumption and execution time on SoC and MPSoC platforms.
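The DAG-based scheduling and partitioning step mentioned above can be sketched with a generic greedy list scheduler. This is not the paper's GGEN algorithm; it is a common textbook-style approximation in which each ready task is placed on the earliest-free processor, shown only to make the DAG-to-processors mapping concrete.

```python
def schedule(tasks, deps, cost, n_procs):
    """Greedy list scheduling of an acyclic task graph.

    tasks: list of task ids; deps: {task: set of predecessor tasks};
    cost: {task: execution time}. Returns (finish times, processor assignment).
    """
    ready_time = [0.0] * n_procs   # next free instant of each processor
    finish, assign = {}, {}
    remaining = set(tasks)
    while remaining:
        # Tasks whose predecessors have all finished are ready to run.
        ready = [t for t in remaining if deps.get(t, set()).issubset(finish)]
        for t in sorted(ready):
            dep_done = max((finish[d] for d in deps.get(t, set())), default=0.0)
            p = min(range(n_procs), key=lambda i: ready_time[i])
            start = max(dep_done, ready_time[p])
            finish[t] = start + cost[t]
            assign[t] = p
            ready_time[p] = finish[t]
            remaining.discard(t)
    return finish, assign


# Diamond DAG: A feeds B and C, which both feed D; unit costs, two processors.
finish, assign = schedule(
    ["A", "B", "C", "D"],
    {"B": {"A"}, "C": {"A"}, "D": {"B", "C"}},
    {"A": 1.0, "B": 1.0, "C": 1.0, "D": 1.0},
    n_procs=2)
# A runs alone, B and C run in parallel, D waits for both: makespan 3.
```

A real HW/SW partitioning flow would weight this with per-processor costs and communication delays, which is where the performance assumptions validated on the refined software models come in.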


2021 ◽  
Vol 5 (4) ◽  
pp. 10-16
Author(s):  
Volodymyr Gorokhovatskyi ◽  
Nataliia Vlasenko

The subject of this research is methods of image classification over a set of key-point descriptors in computer vision systems. The goal is to improve the performance of structural classification methods by introducing indexed hash structures over the set of descriptors of the dataset's reference images, and by chaining several stages of data analysis in the classification process. Methods used: the BRISK detector and descriptors, data-hashing tools, search methods for large data arrays, metric models for estimating vector relevance, and software modelling. Results: an effective image classification method was developed, based on high-speed search over indexed hash structures, which speeds up the computation dozens of times; the gain in computing time grows as the number of reference images and of descriptors in the descriptions increases; a peculiarity of the classifier is that the search is not exact but allows a permissible deviation of the data from the reference; the effectiveness of the classification was verified experimentally, confirming the efficiency of the proposed method. The practical significance of the work lies in building classification models in the transformed hash-representation space, in confirming the efficiency of the proposed classifier modifications on example images, and in developing applied software models that implement the proposed classification methods in computer vision systems.
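The idea of indexing reference descriptors in hash structures and then matching with a permissible deviation can be sketched as follows. This is an illustrative reconstruction, not the authors' code: binary BRISK-style descriptors are stood in for by 64-bit integers, the hash key is simply a short bit prefix, and all names are ours.

```python
from collections import defaultdict

PREFIX = 8          # leading bits used as the hash key
TOLERANCE = 4       # permissible Hamming deviation from a reference descriptor


def hamming(a, b):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")


def build_index(references):
    """references: {class_label: [64-bit int descriptors]} -> hash buckets."""
    index = defaultdict(list)
    for label, descs in references.items():
        for d in descs:
            index[d >> (64 - PREFIX)].append((d, label))
    return index


def classify(index, query_descs):
    """Vote over tolerant matches found in each query descriptor's bucket."""
    votes = defaultdict(int)
    for q in query_descs:
        for ref, label in index[q >> (64 - PREFIX)]:
            if hamming(q, ref) <= TOLERANCE:
                votes[label] += 1
    return max(votes, key=votes.get) if votes else None


refs = {"cat": [0xAAAA0000AAAA0000], "dog": [0x5555FFFF5555FFFF]}
index = build_index(refs)
# A query descriptor that deviates from the "cat" reference by 2 bits:
label = classify(index, [0xAAAA0000AAAA0003])
```

Note that the prefix bucketing makes the search approximate: a match whose deviation falls in the prefix bits lands in a different bucket and is missed, which mirrors the abstract's point that an inexact, tolerance-based search is what buys the speed-up.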


2021 ◽  
Vol 5 (3) ◽  
pp. 5-12
Author(s):  
Volodymyr Gorokhovatsky ◽  
Natalia Stiahlyk ◽  
Vytaliia Tsarevska

The subject of this research is methods of image classification over a set of key-point descriptors in computer vision systems. The goal is to improve the performance of structural classification methods by introducing indexed hash structures over the set of descriptors of the dataset's reference images, and by chaining several stages of data analysis in the classification process. Methods used: the BRISK detector and descriptors, data-hashing tools, search methods for large data arrays, metric models for estimating vector relevance, and software modelling. Results: an effective image classification method was developed, based on high-speed search over indexed hash structures, which speeds up the computation dozens of times; the gain in computing time grows as the number of reference images and of descriptors in the descriptions increases; a peculiarity of the classifier is that the search is not exact but allows a permissible deviation of the data from the reference; the effectiveness of the classification was verified experimentally, confirming the efficiency of the proposed method. The practical significance of the work lies in building classification models in the transformed hash-representation space, in confirming the efficiency of the proposed classifier modifications on example images, and in developing applied software models that implement the proposed classification methods in computer vision systems.


2021 ◽  
Vol 5 (3) ◽  
pp. 13-17
Author(s):  
Pavlo Pustovoitov ◽  
Maxim Okhrimenko ◽  
Vitalii Voronets ◽  
Dmitry Udalov

The subject of this research is image classification methods based on a set of key-point descriptors. The goal is to increase the performance of classification methods, in particular to improve the time characteristics of classification, by introducing hashing tools for the reference data representation. Methods used: the ORB detector and descriptors, data-hashing tools, search methods for data arrays, a metrics-based apparatus for determining the relevance of vectors, and software modelling. Results: an effective image classification method was developed, based on high-speed search over hash structures, which speeds up the computation dozens of times; for the experimental descriptions considered, classification time grows linearly as the number of hashes decreases; the choice of the minimum metric value limit used to assign a class to object descriptors significantly affects classification accuracy, and this limit can be optimized for fixed sample databases; the experimentally achieved classification accuracy indicates the efficiency of the proposed hashing-based method. The practical significance of the work is the synthesis of classification models in the space of hash data representations, proof of the efficiency of the proposed classifier modifications on example images, and the development of applied software models implementing the proposed classification methods in computer vision systems.
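The "minimum metric value limit" discussed above can be made concrete with a small sketch. This is our own toy code, not the authors': a query descriptor is assigned the class of its nearest reference descriptor only if that nearest distance lies within the limit; otherwise it casts no vote, which is what makes the limit a tunable accuracy knob.

```python
def hamming(a, b):
    """Hamming distance between two binary (ORB-style) descriptors as ints."""
    return bin(a ^ b).count("1")


def nearest(query, references):
    """references: list of (descriptor, label); returns (distance, label)."""
    return min((hamming(query, d), label) for d, label in references)


def classify(query, references, limit):
    """Assign a class only when the nearest reference is within the limit."""
    dist, label = nearest(query, references)
    return label if dist <= limit else None


refs = [(0b10110010, "A"), (0b01001101, "B")]
assert classify(0b10110011, refs, limit=2) == "A"   # 1 bit away from class A
assert classify(0b11111111, refs, limit=2) is None  # 4 bits from both: rejected
```

Raising the limit admits more (possibly wrong) matches, lowering it rejects genuine ones; for a fixed sample database the abstract's point is that this trade-off can be tuned empirically.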


Author(s):  
José Antonio Hernández López ◽  
Javier Luis Cánovas Izquierdo ◽  
Jesús Sánchez Cuadrado

Abstract: The application of machine learning (ML) algorithms to address problems related to model-driven engineering (MDE) is currently hindered by the lack of curated datasets of software models. There are several reasons for this, including the lack of large collections of good quality models, the difficulty of labelling models due to the required domain expertise, and the relative immaturity of the application of ML to MDE. In this work, we present ModelSet, a labelled dataset of software models intended to enable the application of ML to software modelling problems. To create it we have devised a method designed to facilitate the exploration and labelling of model datasets by interactively grouping similar models using off-the-shelf technologies like a search engine. We have built an Eclipse plug-in to support the labelling process, which we have used to label 5,466 Ecore meta-models and 5,120 UML models with their category as the main label plus additional secondary labels of interest. We have evaluated the ability of our labelling method to create meaningful groups of models in order to speed up the process, improving the effectiveness of classical clustering methods. We showcase the usefulness of the dataset by applying it in a real scenario: enhancing the MAR search engine. We use ModelSet to train models able to infer useful metadata to navigate search results. The dataset and the tooling are available at https://figshare.com/s/5a6c02fa8ed20782935c and a live version at http://modelset.github.io.
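The labelling workflow above rests on grouping similar models so that a whole group can be labelled at once. A crude flavour of that idea, assuming nothing about ModelSet's actual tooling, is to compare models by the sets of element names they contain; the toy "models", the Jaccard measure, and the greedy grouping below are all our own illustrative choices.

```python
def jaccard(a, b):
    """Set similarity: shared elements over all elements."""
    return len(a & b) / len(a | b)


# Toy stand-ins for models, reduced to the names of their elements.
models = {
    "m1": {"Library", "Book", "Author"},
    "m2": {"Library", "Book", "Loan"},
    "m3": {"Robot", "Sensor", "Motor"},
}


def group(models, threshold=0.3):
    """Greedy grouping: each model joins the first group it is similar to."""
    groups = []
    for name, elems in models.items():
        for g in groups:
            if jaccard(elems, g["elems"]) >= threshold:
                g["members"].append(name)
                g["elems"] |= elems   # the group grows to cover its members
                break
        else:
            groups.append({"members": [name], "elems": set(elems)})
    return [g["members"] for g in groups]


grouped = group(models)
# The two library-domain models end up together; the robotics model stands alone.
```

A labeller could then tag each group ("library", "robotics") instead of each model, which is the speed-up the abstract reports for its search-engine-based variant of this idea.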


Author(s):  
Alexey Bogomolov

The article provides a comprehensive description of the information technologies of digital adaptive medicine. The emphasis is on their applicability to the development of specialized automated complexes, software models, and systems for studying a person's ability to adapt to environmental conditions; requirements for information technologies that enhance these abilities are formulated. The features of these technologies are described with respect to applied systemic studies of life support, preservation of professional health, and prolongation of human longevity. Six basic concepts of adaptive medicine are characterized, with emphasis on the mathematical support for information processing, and priorities for improving the information technologies used in these concepts are determined. The article considers the information technologies used to ensure a person's professional performance, emphasizing the need for adequate methods of diagnosing a person's state at all stages of professional activity, and the need to develop digital-twin technologies that adequately simulate the body's adaptation processes and reactions under real conditions. The characteristics of information technologies for personalized health-risk monitoring are given; these make it possible to objectify the effects of the physical factors of the working environment and to inform personnel, individually and collectively, about environmental hazards. The article shows the urgent need to standardize information-processing methods in the development of information technologies for digital adaptive medicine, in the interests of ensuring the physiological adequacy and mathematical correctness of approaches to obtaining and processing information about a person's state.
It is concluded that the priorities for improving the information technologies of digital adaptive medicine are associated with implementing the achievements of the fourth industrial revolution, including the concept of socio-cyber-physical systems.

