Journal of the Brazilian Computer Society
Latest Publications


TOTAL DOCUMENTS: 603 (five years: 38)

H-INDEX: 17 (five years: 3)

Published by Springer (BioMed Central Ltd.)

ISSN: 0104-6500 (print), 1678-4804 (electronic)

2021, Vol. 27 (1)
Author(s): Daniel R. F. Apolinário, Breno B. N. de França

Abstract: The microservice architecture is claimed to satisfy ongoing software development demands such as resilience, flexibility, and velocity. However, developing applications based on microservices also brings drawbacks, such as increased operational complexity, and recent studies have pointed out the lack of methods to prevent maintainability problems in these solutions. Disregarding established design principles during software evolution may lead to so-called architectural erosion, which can end up in a condition of unfeasible maintenance. As microservices can be considered a new architectural style, there are few initiatives for monitoring the evolution of microservice-based architectures. In this paper, we introduce the SYMBIOTE method for monitoring the coupling evolution of microservice-based systems. More specifically, the method collects coupling metrics at runtime (in staging or production environments) and monitors them throughout software evolution. A longitudinal analysis of the collected measures allows detecting an upward trend in coupling metrics that could represent a sign of architectural degradation. To develop the proposed method, we performed an experimental analysis of the behavior of the coupling metrics using artificially generated data. The results of this experiment revealed the metrics' behavior in different scenarios, providing insights for developing the analysis method that identifies architectural degradation. We evaluated SYMBIOTE on a real-world open-source project, Spinnaker. The results show a relationship between architectural changes and upward trends in coupling metrics for most of the analyzed release intervals. Therefore, the first version of SYMBIOTE has shown potential to detect signs of architectural degradation during the evolution of microservice-based architectures.
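The core idea of the longitudinal analysis described above — flag an upward trend in a coupling metric measured across releases — can be sketched with a least-squares slope test. The metric, threshold, and data below are illustrative assumptions, not SYMBIOTE's actual rules.

```python
# Hypothetical sketch of trend detection over a coupling metric series.
# The threshold and the example metric are illustrative assumptions.

def trend_slope(values):
    """Least-squares slope of a metric series indexed by release number."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    var = sum((x - mean_x) ** 2 for x in range(n))
    return cov / var

def degradation_sign(coupling_per_release, threshold=0.05):
    """Flag a possible sign of architectural degradation if the coupling
    metric grows faster than the threshold per release."""
    return trend_slope(coupling_per_release) > threshold

# e.g. average number of dependencies per microservice across 6 releases
releases = [2.1, 2.3, 2.2, 2.6, 2.9, 3.1]
print(degradation_sign(releases))  # True: the upward trend is flagged
```

A regression slope rather than a pairwise release comparison smooths over single-release noise, which matches the paper's emphasis on longitudinal analysis.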


2021, Vol. 27 (1)
Author(s): Jairo G. de Freitas, Keiji Yamanaka

Abstract: There is a wide variety of computational methods for solving optimization problems, among them various strategies derived from ant colony optimization (ACO). However, the great majority of these methods are limited-range-search algorithms: they find the optimal solution only if the given domain contains that solution. This is a limitation for real-world problems, where it is not always possible to determine the correct domain with certainty. This article proposes a broad-range search algorithm, i.e., one that usually finds the optimal solution even when the initial domain does not contain it, because the initial domain is adjusted until a domain containing the solution is found. This algorithm, called ARACO and derived from RACO, obtains better results through strategies that accelerate, at opportune moments, the parameters responsible for adjusting the supplied domain and, in case the algorithm stagnates, expand the domain around the best solution found to prevent the algorithm from becoming trapped in a local minimum. Through these strategies, ARACO obtains better results than its predecessors with respect to the number of function evaluations necessary to find the optimal solution, in addition to a 100% success rate on practically all tested functions, demonstrating that it is a high-performance and reliable algorithm. The algorithm has been tested on classic benchmark functions and on the benchmark functions of the IEEE Congress on Evolutionary Computation (CEC 2019 100-Digit Challenge).
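The broad-range idea in this abstract — shrink the domain while progressing, expand it around the best solution on stagnation — can be sketched in a few lines. The factors and the stagnation signal below are illustrative assumptions, not ARACO's actual parameter rules.

```python
# Illustrative sketch (not ARACO's actual update rules) of domain
# adjustment: the search domain is re-centred on the best solution and
# expanded on stagnation, so the true optimum need not lie in the
# initial domain.

def adjust_domain(lo, hi, best, stagnant, expand_factor=2.0, shrink_factor=0.5):
    """Return a new (lo, hi) interval centred on the current best.

    - While progressing, shrink the domain to refine the search.
    - On stagnation, expand it to escape a local minimum or to reach an
      optimum lying outside the initial domain.
    """
    half = (hi - lo) / 2
    half *= expand_factor if stagnant else shrink_factor
    return best - half, best + half

lo, hi = 0.0, 1.0  # initial domain that may not contain the optimum
lo, hi = adjust_domain(lo, hi, best=0.9, stagnant=True)
print(lo, hi)  # the new domain extends beyond the initial [0, 1]
```

Re-centring on the best solution is what lets the adjusted domain eventually cover an optimum outside the interval initially supplied.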


2021, Vol. 27 (1)
Author(s): Vinícius Ferreira Galvão, Cristiano Maciel, Roberto Pereira, Isabela Gasparini, José Viterbo, ...

Abstract: Intense social media interaction, wearable devices, mobile applications, and the pervasive use of sensors have created a personal information ecosystem that gathers traces of individual behavior. These traces are the digital legacy individuals build throughout their lives. Advances in artificial intelligence have fed the dream of building artificial agents trained on these digital traces to behave like a deceased person, and individuals now face the possibility of immortalizing their ideas, reasoning, and behavior. Are people prepared for that? Are people willing to do it? How do people perceive the possibility of letting digital avatars take care of their digital legacy? This paper sheds light on these questions by discussing users' perceptions of digital immortality through a focus group with 8 participants. Our findings suggest some key human values that must be addressed. They can serve as preliminary input to inform system design, from the very early stages of development, so that systems preserve the digital legacy while respecting human needs and values concerning the delicate emotional moment that death brings. This qualitative research analyzes the data and, based on the insights gained, proposes important considerations for future developments in this area.


2021, Vol. 27 (1)
Author(s): Diego Rodrigues, Altigran da Silva

Abstract: Schema matching is the problem of finding semantic correspondences between elements of different schemas. It is challenging because disparate elements in the schemas often represent the same concept. Traditional instances of the problem involve a pair of schemas, but recently there has been increasing interest in matching several related schemas at once, a problem known as schema matching networks, where the goal is to identify elements from several schemas that correspond to a single concept. We propose a family of machine-learning-based methods for schema matching networks; machine learning has proved to be a competitive alternative for the traditional matching problem in several domains. To overcome the need for a large amount of training data, we also propose a bootstrapping procedure that generates training data automatically, and we leverage constraints that arise in network scenarios to improve the quality of this data. We also study a strategy for receiving user feedback to assert some of the generated matchings and, relying on this feedback, improve the quality of the final result. Our experiments show that our methods can outperform baselines, reaching an F1-score of up to 0.83.
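The bootstrapping step described above can be sketched as follows: attribute pairs whose cheap string similarity is confidently high or confidently low are labeled automatically, giving a learned matcher training data without manual annotation. The similarity measure and thresholds are illustrative assumptions, not the paper's actual procedure or its network constraints.

```python
# Hypothetical sketch of bootstrapped training data for schema matching
# networks. Thresholds and the similarity function are assumptions.
from difflib import SequenceMatcher
from itertools import combinations

def sim(a, b):
    """Cheap string similarity between two attribute names in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def bootstrap_labels(schemas, pos=0.9, neg=0.3):
    """Label attribute pairs across all schema pairs in the network:
    confident matches become positive examples, confident non-matches
    negative; ambiguous pairs are left for the learned model (omitted)."""
    labeled = []
    for s1, s2 in combinations(schemas, 2):
        for a in s1:
            for b in s2:
                s = sim(a, b)
                if s >= pos:
                    labeled.append((a, b, 1))
                elif s <= neg:
                    labeled.append((a, b, 0))
    return labeled

schemas = [["customer_name", "zip"], ["customer_name", "phone"]]
print(bootstrap_labels(schemas))
```

In the network setting, the same attribute pair may be reachable through several schemas, which is where the constraint-based cleaning mentioned in the abstract would further filter these automatic labels.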


2021, Vol. 27 (1)
Author(s): João M. M. C. Cota, Alberto H. F. Laender, Raquel O. Prates

Abstract: Identifying and studying the formation of researchers over the years is a challenging task, since current repositories of theses and dissertations are cataloged in a decentralized manner in different digital libraries, many of them with limited scope. In this article, we report our efforts towards building a large repository recording Brazilian academic genealogy. We collected data from the Lattes platform, an internationally recognized repository of researchers' curricula maintained by the Brazilian National Council for Scientific and Technological Development (CNPq), and developed a user-oriented platform, named Science Tree, that generates the academic genealogy trees of Brazilian researchers from these data, along with additional data resulting from a series of analyses of the main properties of such trees. To assess the facilities provided by the Science Tree platform, we conducted an experimental evaluation with two groups of users: 286 researchers who answered an evaluation questionnaire, and seven researchers with extensive academic experience who participated in a face-to-face assessment conducted through a personal interview, during which they performed some predefined tasks. The results of these two evaluations with typical users enabled us not only to validate the main features offered by the platform but also to identify new ones that could be added in the future. Overall, this effort has allowed us to identify interesting aspects of the academic careers of Brazilian researchers, highlighting the importance of generating and cataloging their academic genealogy trees.


2021, Vol. 27 (1)
Author(s): Paulo Drews-Jr, Isadora de Souza, Igor P. Maurell, Eglen V. Protas, Silvia S. C. Botelho

Abstract: Image segmentation is an important step in many computer vision and image processing algorithms and is often adopted in tasks such as object detection, classification, and tracking. Segmenting underwater images is a challenging problem because the water and the particles it contains scatter and absorb light rays, effects that make the application of traditional segmentation methods cumbersome. Moreover, applying state-of-the-art segmentation methods, which are based on deep learning, requires an underwater image segmentation dataset. In this paper, we therefore develop a dataset of real underwater images, together with combinations using simulated data, to allow the training of two of the best deep learning segmentation architectures, aiming at the segmentation of underwater images in the wild. In addition to models trained on these datasets, we also explore fine-tuning and image restoration strategies. For a more meaningful evaluation, all models are compared on the test set of real underwater images. We show that the methods obtain impressive results against manually segmented ground truth, mainly when trained with our real dataset, even using a relatively small number of labeled underwater training images.


2021, Vol. 27 (1)
Author(s): Francisco D. B. S. Praciano, Paulo R. P. Amora, Italo C. Abreu, Francisco L. F. Pereira, Javam C. Machado

Abstract
Background: Database management systems (DBMSs) use a declarative language to execute queries over stored data. The DBMS defines how the data will be processed and ultimately retrieved, so it must choose the best option from the different possibilities based on an estimation process. The optimization process uses estimated cardinalities to make decisions such as choosing the predicate order.
Methods: In this paper, we propose Robust Cardinality, an approach to calculating cardinality estimates of query operations that guides the execution engine of the DBMS to choose the best possible plan, or at least to avoid the worst one. By using machine learning instead of the current histogram heuristics, these estimates can be improved, leading to more efficient query execution.
Results: We performed experimental tests using PostgreSQL, comparing both estimators as well as a modern technique proposed in the literature. With Robust Cardinality, a lower estimation error over a batch of queries was obtained, and PostgreSQL executed these queries more efficiently than with the default estimator. We observed a 3% reduction in execution time after reducing the query estimation error by a factor of 4.
Conclusions: From the results, we conclude that this new approach improves query processing in DBMSs, especially the generation of cardinality estimates.
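The learned-estimator idea can be sketched minimally: instead of histogram heuristics, remember (query-feature, observed-cardinality) pairs from previously executed queries and predict by nearest neighbour in feature space. The features and data below are illustrative assumptions, not the paper's model or PostgreSQL internals.

```python
# Hypothetical sketch of learned cardinality estimation from execution
# feedback. Feature choice and history values are illustrative.

def nearest_cardinality(history, features):
    """Predict the cardinality of a query operation as the observed
    cardinality of the most similar previously executed query.

    history: list of (feature_vector, observed_cardinality) pairs.
    """
    def dist(f):
        return sum((a - b) ** 2 for a, b in zip(f, features))
    return min(history, key=lambda h: dist(h[0]))[1]

# features: (number of predicates, estimated selectivity of first predicate)
history = [((1, 0.5), 5000), ((2, 0.1), 300), ((3, 0.01), 12)]
print(nearest_cardinality(history, (2, 0.15)))  # 300
```

An estimate grounded in actual executions is what lets the optimizer avoid the catastrophically bad plans that histogram assumptions (e.g. attribute independence) can produce.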


2021, Vol. 27 (1)
Author(s): Pedro Lopes de Souza, Wanderley Lopes de Souza, Luís Ferreira Pires

Abstract: When developing a learning management system (LMS) using Scrum, we noticed that it was quite often necessary to redefine some system behaviour scenarios, due to ambiguities in the requirement specifications or misinterpretations of the stories reported by the Product Owners (POs). The definition of test suites was also cumbersome, resulting in suites that were incomplete or did not comply with the system requirements at all. Based on this experience, and to deal with these problems, we propose in this paper the ScrumOntoBDD approach to agile software development, which combines Scrum, ontologies, and behaviour-driven development (BDD). The approach is centred on the concepts and techniques of Scrum and BDD and focuses on the planning and analysis phases of the software life cycle, since current BDD tools provide little support for these phases, while most of the problems during the LMS development were found exactly there. We claim that our approach improves software development practice in this respect. Furthermore, ScrumOntoBDD employs ontologies to reduce the ambiguities intrinsic to using a natural language as the BDD ubiquitous language. In this paper, we illustrate and systematically evaluate our approach, showing that it is beneficial since it improves communication between the members of an agile development team.


2021, Vol. 27 (1)
Author(s): Yusseli Lizeth Méndez Mendoza, M. Cecília C. Baranauskas

Abstract: Contemporary computational technologies (tangible and ubiquitous) are still challenging mainstream systems design methods, demanding new ways of considering interaction design and its evaluation. In this work, we draw on concepts of enactivism and enactive systems to investigate interaction and experience in the context of the ubiquity of computational systems. Our study is illustrated with the design and usage experience of TangiTime, a tangible tabletop system proposed for an educational exhibit. TangiTime was designed to enable a socioenactive experience of interaction with the concept of “deep time.” In this paper, we present the TangiTime design process and the artifacts designed and implemented, in their conceptual, interactional, and architectural aspects. We also present and discuss the results of an exploratory study in an exhibition context, observing how socioenactive aspects of the experience potentially emerge from the interaction. Overall, the paper contributes elements of design that should be considered when designing a socioenactive experience in environments constituted by contemporary computational technology.


2021, Vol. 27 (1)
Author(s): Aluisio I. R. Fontes, Leandro L. S. Linhares, João P. F. Guimarães, Luiz F. Q. Silveira, Allan M. Martins

Abstract: Recently, the maximum correntropy criterion (MCC) has been successfully applied in numerous applications involving non-Gaussian data processing. MCC employs a free parameter called the kernel width, which affects the convergence rate, robustness, and steady-state performance of adaptive filtering. However, determining the optimal value of this parameter is not always a trivial task. In this context, this paper proposes a novel method called adaptive convex combination maximum correntropy criterion (ACCMCC), which combines an adaptive kernel algorithm with convex combination techniques. ACCMCC takes advantage of a convex combination of two adaptive MCC-based filters whose kernel widths are adjusted iteratively as a function of the minimum error value obtained in a predefined estimation window. Results obtained in impulsive noise environments show that the proposed approach achieves equivalent convergence rates but with increased accuracy and robustness compared with similar algorithms reported in the literature.
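The building block this abstract relies on is the MCC-based adaptive filter update, where a Gaussian kernel down-weights large (impulsive) errors; this is what makes MCC robust, and also why the kernel width matters. The sketch below shows one step of the standard MCC-LMS form (an assumption; it is not ACCMCC itself, whose combination and width-adaptation rules are not reproduced here).

```python
# Sketch of one MCC-LMS weight update (assumed standard form, not ACCMCC):
# w <- w + mu * exp(-e^2 / (2 * sigma^2)) * e * x
import math

def mcc_update(w, x, d, mu=0.1, sigma=1.0):
    """One MCC-LMS step; sigma is the kernel width discussed above."""
    y = sum(wi * xi for wi, xi in zip(w, x))    # filter output
    e = d - y                                    # estimation error
    g = math.exp(-e * e / (2 * sigma * sigma))   # kernel gain in (0, 1]
    return [wi + mu * g * e * xi for wi, xi in zip(w, x)], e

w = [0.0, 0.0]
# an impulsive sample (huge error) barely moves the weights:
w_after, e = mcc_update(w, [1.0, 1.0], d=100.0)
print(w_after)  # weights stay near zero because exp(-e^2/2) ~ 0
```

A small sigma rejects outliers aggressively but slows convergence, while a large one approaches plain LMS; combining two filters with different widths, as ACCMCC does, trades off between these regimes.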

