Multi-Dimensional Software Tool for OSS Project Management Considering Cloud with Big Data

Author(s):  
Yoshinobu Tamura ◽  
Shigeru Yamada

We discuss a method of software dependability assessment based on stochastic differential equation modeling that takes into account the numbers of components, cloud applications, and users. We then consider the determination of an optimum software maintenance time that minimizes the total expected software cost. In particular, we develop a three-dimensional AIR application for reliability and cost optimization analyses based on the proposed method, together with a three-dimensional application built on NW.js. Furthermore, we present several numerical examples produced with the developed software to evaluate its performance as an optimization and reliability assessment tool for big data on cloud computing.
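
The cost criterion is not written out in the abstract; a standard formulation from the software-release literature, with assumed symbols, is the following sketch (not necessarily the exact cost structure used by the authors):

```latex
% Assumed symbols: N(t) = cumulative number of faults detected by time t,
% a = expected total fault content, c1 < c2 = per-fault fix costs before/after
% the maintenance time, c3 = operation cost per unit time.
C(t) = c_1\,\mathbb{E}[N(t)] + c_2\bigl(a - \mathbb{E}[N(t)]\bigr) + c_3\,t,
\qquad t^{*} = \arg\min_{t \ge 0} C(t)
```

Because early fixes are cheaper than operational fixes (c1 < c2) while operation cost grows linearly in t, C(t) is typically convex and t* is found from dC/dt = 0 or by a numerical search.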

Author(s):  
Yoshinobu Tamura ◽  
Shigeru Yamada

This paper focuses on big data in a cloud computing environment built with open source software such as OpenStack and Eucalyptus, chosen for their unified data management and low cost. We propose a new approach to software dependability assessment based on stochastic differential equation modeling and jump diffusion process modeling that takes into account the numbers of components, cloud applications, and users. Moreover, we discuss the determination of an optimum software maintenance time that minimizes the total expected software cost. In particular, we develop a three-dimensional AIR application for reliability and cost optimization analysis based on the proposed method. We then report the numerical performance of the developed AIR application to evaluate the method of software reliability assessment for big data on cloud computing.
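
A minimal sketch of the jump diffusion idea, not the paper's exact model: a geometric diffusion for the cumulative fault-detection level N(t) with compound-Poisson jumps superimposed, simulated by an Euler scheme. All parameter values below are illustrative.

```python
# Euler-Maruyama simulation of a Merton-style jump diffusion:
#   dN(t) = mu N(t) dt + sigma N(t) dW(t) + N(t-) dJ(t),
# where J is a compound Poisson process (jumps model irregular shocks
# such as version upgrades). Parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate_jdp(n0=1.0, mu=0.05, sigma=0.1, lam=0.5, jump_mu=-0.02,
                 jump_sigma=0.05, T=10.0, steps=1000):
    """Return one sample path of a geometric jump diffusion on [0, T]."""
    dt = T / steps
    path = np.empty(steps + 1)
    path[0] = n0
    for k in range(steps):
        dw = rng.normal(0.0, np.sqrt(dt))       # Brownian increment
        n_jumps = rng.poisson(lam * dt)          # jumps in this interval
        jump = np.sum(rng.normal(jump_mu, jump_sigma, n_jumps))
        path[k + 1] = path[k] * (1 + mu * dt + sigma * dw + jump)
    return path

paths = np.array([simulate_jdp() for _ in range(200)])
print("mean N(T) over 200 paths:", paths[:, -1].mean())
```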


Author(s):  
Yoshinobu Tamura ◽  
Shigeru Yamada

We focus on doubly irregular fluctuations, including jumps, in the operational performance of open source software (OSS). This paper then proposes a cost optimization method based on a flexible jump diffusion process (JDP) model that accounts for several noisy scenarios affecting maintenance effort during OSS operation with version upgrades. In particular, we discuss a method of effort optimization based on the flexible JDP model with unexpected irregular continuous fluctuation at version upgrades in OSS projects. The proposed method will help OSS project managers decide the optimal version-upgrade and maintenance times under OSS project management. Furthermore, we show several analysis examples of the optimization method that consider the properties of version upgrades in OSS projects.
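
A minimal Monte Carlo sketch of how such an effort-optimization step could look: simulate maintenance-effort paths from a jump diffusion (as in the previous example) and grid-search the upgrade time t minimizing an assumed total-cost criterion. The cost structure and all constants are illustrative assumptions, not taken from the paper.

```python
# Assumed criterion: cost(t) = c_up + c_before * E[Effort(t)] + c_after * (T - t),
# i.e., a fixed upgrade cost, effort accumulated up to the upgrade, and a
# per-unit-time penalty for running the old version afterwards.
import numpy as np

rng = np.random.default_rng(1)
T, steps = 10.0, 1000
times = np.linspace(0.0, T, steps + 1)

def effort_path(e0=1.0, mu=0.08, sigma=0.2, lam=0.3, jump=0.1):
    """One simulated maintenance-effort path with upward jump spikes."""
    dt = T / steps
    e = np.empty(steps + 1); e[0] = e0
    for k in range(steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        j = jump * rng.poisson(lam * dt)         # effort spikes at jump times
        e[k + 1] = max(e[k] * (1 + mu * dt + sigma * dw) + j, 0.0)
    return e

mean_effort = np.mean([effort_path() for _ in range(300)], axis=0)

c_up, c_before, c_after = 5.0, 2.0, 1.5          # illustrative coefficients
cost = c_up + c_before * mean_effort + c_after * (T - times)
t_star = times[np.argmin(cost)]
print(f"estimated optimal upgrade time: {t_star:.2f}")
```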


Author(s):  
YOSHINOBU TAMURA ◽  
SHIGERU YAMADA

At present, cloud computing is attracting attention as a network service for sharing computing resources, i.e., networks, servers, storage, applications, and services. We focus on a cloud computing environment built with open source software such as OpenStack and Eucalyptus, chosen for their unified data management and low cost. In this paper, we propose a new approach to software dependability assessment based on stochastic differential equation modeling that takes into account the numbers of components, cloud applications, and users. We also analyze actual data to show numerical examples of software dependability assessment reflecting these characteristics of cloud computing. Moreover, we discuss the determination of optimum software maintenance times that minimize the total expected software cost.
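
A minimal Euler–Maruyama sketch of an SDE-based reliability-growth model of the general form used in this line of work (the exact drift and its parameters here are assumptions for illustration):

```python
# Simulate dS(t) = lambda(t) S(t) dt + sigma S(t) dW(t), where S(t) is a
# fault-detection (or usage) level and lambda(t) an assumed decaying intensity.
import numpy as np

rng = np.random.default_rng(42)

def euler_maruyama(s0=1.0, sigma=0.15, b=0.3, T=20.0, steps=2000):
    dt = T / steps
    s = np.empty(steps + 1); s[0] = s0
    for k in range(steps):
        t = k * dt
        lam = b * np.exp(-0.1 * t)               # assumed decaying intensity
        dw = rng.normal(0.0, np.sqrt(dt))        # Brownian increment
        s[k + 1] = s[k] + lam * s[k] * dt + sigma * s[k] * dw
    return s

print("S(T) from one simulated path:", euler_maruyama()[-1])
```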


Author(s):  
Raimo Hartmann ◽  
Hannah Jeckel ◽  
Eric Jelli ◽  
Praveen K. Singh ◽  
Sanika Vaidya ◽  
...  

Biofilms are microbial communities that represent a highly abundant form of microbial life on Earth. Inside biofilms, phenotypic and genotypic variations occur in three-dimensional space and time; microscopy and quantitative image analysis are therefore crucial for elucidating their functions. Here, we present BiofilmQ—a comprehensive image cytometry software tool for the automated and high-throughput quantification, analysis and visualization of numerous biofilm-internal and whole-biofilm properties in three-dimensional space and time.


Author(s):  
Meng Huang ◽  
Shuai Liu ◽  
Yahao Zhang ◽  
Kewei Cui ◽  
Yana Wen

The integration of artificial intelligence technology into school education has become a trend and an important driving force for the development of education. With the advent of the big data era, the relationships within students' learning-status data are largely nonlinear, and analysis with artificial intelligence techniques shows that students' living habits are closely related to their academic performance. In this paper, drawing on a survey of the living habits and learning conditions of more than 2,000 students across 10 cohorts at the Information College of the Institute of Disaster Prevention, we used a hierarchical clustering algorithm to classify the nearly 180,000 collected records, used the big data visualization technology of Echarts + iView + GIS with JavaScript development to dynamically display students' life trajectories and learning information on a map, and then applied the three-dimensional ArcGIS for JS API technology to show the campus network infrastructure. Finally, a training model was established from historical academic results, life trajectories, graduate salaries, school infrastructure, and other information, combined with the artificial intelligence back-propagation neural network algorithm. Analysis of the training results showed that students' academic performance is related to reasonable amounts of laboratory study time, dormitory stay time, physical exercise time, and social entertainment time. The system can then intelligently predict students' academic performance and give reasonable suggestions according to the established prediction model. The realization of this project can provide technical support for university educators.
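
A minimal sketch of the two analysis stages described above, on synthetic data: hierarchical (Ward) clustering of habit records, then a back-propagation network (sklearn's MLPRegressor) predicting academic performance. The feature set, the generating relation, and all data are assumptions for illustration, not the study's data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# columns: lab study hours, dormitory hours, exercise hours, entertainment hours
X = rng.uniform(0, 8, size=(500, 4))
# assumed relation: performance rises with study/exercise, falls with the rest
y = 60 + 4*X[:, 0] + 2*X[:, 2] - 1.5*X[:, 1] - 2*X[:, 3] + rng.normal(0, 3, 500)

# stage 1: agglomerative (Ward) clustering into lifestyle groups
labels = fcluster(linkage(X, method="ward"), t=4, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])

# stage 2: back-propagation regressor for score prediction
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```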


Author(s):  
HYEON SOO KIM ◽  
YONG RAE KWON ◽  
IN SANG CHUNG

Software restructuring is recognized as a promising method for improving the logical structure and understandability of a software system composed of modules with loosely coupled elements. In this paper, we present methods for restructuring an ill-structured module during the software maintenance phase. The methods identify modules performing multiple functions and restructure them. To identify multi-function modules, the notion of the tightly-coupled module, which performs a single specific function, is formalized. The method uses information on data and control dependence, and applies program slicing to extract the tightly-coupled modules from a multi-function module. The identified multi-function module is then restructured into a number of functional-strength modules or an informational-strength module, with module strength used as the criterion for deciding how to restructure. The proposed methods can be readily automated and incorporated into a software tool.
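
A toy illustration of the slicing step (not the paper's algorithm): backward slicing over a combined data/control dependence graph collects every statement a criterion transitively depends on; disjoint slices for different criteria approximate the tightly-coupled, single-function modules to be extracted. Statement names and edges are made up.

```python
from collections import defaultdict

# edges: statement -> statements it depends on (data or control dependence)
deps = defaultdict(set, {
    "s4": {"s1", "s2"},   # s4 uses values defined in s1 and s2
    "s5": {"s3"},         # s5 uses a value defined in s3
    "s6": {"s4"},         # s6 is control-dependent on s4
})

def backward_slice(criterion):
    """All statements the criterion transitively depends on (plus itself)."""
    seen, stack = set(), [criterion]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(deps[node])
    return seen

print(sorted(backward_slice("s6")))   # -> ['s1', 's2', 's4', 's6']
print(sorted(backward_slice("s5")))   # -> ['s3', 's5']
```

Here the two slices are disjoint, suggesting the enclosing module bundles two independent functions and is a candidate for restructuring.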


Author(s):  
Annamaria Kubovcikova

Purpose – The purpose of this paper is to test the properties of the well-known three-dimensional adjustment scale established by Black et al. (1988, 1989), namely its dimensionality and internal consistency. The theoretical basis of the construct is discussed in relation to formative and reflective measurement approaches. Design/methodology/approach – Two different ways of organizing the adjustment items (random/non-random) were used to assess the internal consistency of the three-dimensional adjustment scale. The quantitative analysis is based on survey data from 468 assigned expatriates in Asia, which were subjected to exploratory and confirmatory factor analyses as well as structural equation modeling, specifically the multiple indicators multiple causes (MIMIC) model. Findings – The study revealed that the adjustment construct is possibly misspecified, especially the general adjustment dimension, which tested as a formative, not a reflective, scale. There is further evidence that the wrong measurement approach skewed the coefficient connecting adjustment to performance, the key construct in its nomological network. Moreover, the dimensionality and internal consistency of the scale deteriorate to a large extent when the items are randomized. The findings highlight the need for a clear concept definition leading to an appropriate operationalization of the construct. Originality/value – The study is one of the few to rigorously test the properties of a construct that has been used for almost 30 years, yielding novel conclusions about its stability and consistency.
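
For readers unfamiliar with the MIMIC specification invoked above, its general form is the standard one below (symbols follow common SEM notation, not necessarily the paper's):

```latex
% Measurement part: observed indicators y load reflectively on the latent
% adjustment factor eta; structural part: eta is regressed on observed
% (formative) causes x.
\begin{aligned}
  \mathbf{y} &= \boldsymbol{\Lambda}\,\eta + \boldsymbol{\varepsilon}
      && \text{(measurement part: reflective indicators)}\\
  \eta &= \boldsymbol{\gamma}^{\top}\mathbf{x} + \zeta
      && \text{(structural part: formative causes)}
\end{aligned}
```

Misspecification arises when items that behave like causes (the x side) are modeled as reflective indicators (the y side), which is the concern raised for the general adjustment dimension.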


Author(s):  
Rola Khamisy-Farah ◽  
Leonardo B. Furstenau ◽  
Jude Dzevela Kong ◽  
Jianhong Wu ◽  
Nicola Luigi Bragazzi

Tremendous scientific and technological achievements have been revolutionizing the current medical era, changing the way physicians practice their profession and deliver healthcare. This is due to the convergence of various advancements related to digitalization and the use of information and communication technologies (ICTs), ranging from the internet of things (IoT) and the internet of medical things (IoMT) to robotics, virtual and augmented reality, and massively parallel and cloud computing. Further progress has been made in additive manufacturing and three-dimensional (3D) printing, sophisticated statistical tools such as big data visualization and analytics (BDVA) and artificial intelligence (AI), mobile and smartphone applications (apps), remote monitoring and wearable sensors, and e-learning, among others. Within this new conceptual framework, big data represents a massive set of data characterized by different properties and features. These can be categorized from both quantitative and qualitative standpoints, and include data generated from wet labs and microarrays (molecular big data), databases and registries (clinical/computational big data), imaging techniques such as radiomics (imaging big data), and web searches (so-called infodemiology, digital big data). The present review aims to show how big and smart data can revolutionize gynecology by shedding light on female reproductive health, in terms of both physiology and pathophysiology. More specifically, they have potential uses in gynecology to increase accuracy and precision, stratify patients, provide opportunities for personalized treatment options rather than delivering a package of "one-size-fits-all" healthcare management provisions, and enhance effectiveness at each stage (health promotion, prevention, diagnosis, prognosis, and therapeutics).

