The Rank of Silaturrahmi-Assimilated Collaboration Parameter Based on Core Drive Using Octalysis Gamification Framework and Fuzzy AHP

TEM Journal ◽  
2021 ◽  
pp. 1971-1982
Author(s):  
Fitri Marisa ◽  
Sharifah Sakinah Syed Ahmad ◽  
Zeratul Izzah Mohd Yusoh ◽  
Titien Agustina ◽  
Anastasia L Maukar ◽  
...  

This study aims to build SME collaboration parameters from the assimilation of the "silaturrahmi" culture, taking users' motivation to collaborate into account. The parameters are validated with linear regression, and their ranking is then determined using a combined model of the Octalysis and Fuzzy AHP frameworks. The test yielded four collaboration parameters assimilated from "silaturrahmi", ranked by the users' level of motivation (core drive): Relationship Building-RB (52.86%), Reciprocal Sustainment-RS (25.44%), Reciprocal Assistant-RA (21.77%), and Active Support-AS (0.83%). It can be concluded that the four parameters can potentially measure the performance of SME collaboration, and that the combined model can determine users' motivation (core drive) to collaborate through these parameters. The ranking serves as a reference for developing a collaborative framework: activities related to the highest-weighted parameter are prioritized, while the lowest-weighted parameter is singled out for evaluation.
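As a rough illustration of the Fuzzy AHP weighting step this abstract describes, the sketch below derives crisp weights for the four parameters from a triangular-fuzzy pairwise comparison matrix using Buckley's geometric-mean method. All pairwise values are invented for illustration; they are not the study's data, and the resulting percentages will not match the paper's.

```python
import numpy as np

# Hypothetical triangular fuzzy pairwise comparisons (l, m, u) for the
# four parameters RB, RS, RA, AS -- NOT the study's actual judgements.
tfn = np.array([
    [[1, 1, 1],       [2, 3, 4],       [2, 3, 4],       [6, 7, 8]],
    [[1/4, 1/3, 1/2], [1, 1, 1],       [1, 2, 3],       [4, 5, 6]],
    [[1/4, 1/3, 1/2], [1/3, 1/2, 1],   [1, 1, 1],       [4, 5, 6]],
    [[1/8, 1/7, 1/6], [1/6, 1/5, 1/4], [1/6, 1/5, 1/4], [1, 1, 1]],
])

# Fuzzy geometric mean of each row, component-wise (Buckley's method)
geo = tfn.prod(axis=1) ** (1 / tfn.shape[0])      # shape (4, 3)
# Divide by the fuzzy total; the (l, m, u) sum is inverted by reversing it
total = geo.sum(axis=0)
fuzzy_w = geo / total[::-1]                        # (l/u_sum, m/m_sum, u/l_sum)
# Defuzzify by averaging l, m, u, then normalize to percentages
crisp = fuzzy_w.mean(axis=1)
weights = crisp / crisp.sum()

for name, w in zip(["RB", "RS", "RA", "AS"], weights):
    print(f"{name}: {w:.2%}")
```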

2016 ◽  
Vol 124 (12) ◽  
pp. 1848-1856 ◽  
Author(s):  
Lydiane Agier ◽  
Lützen Portengen ◽  
Marc Chadeau-Hyam ◽  
Xavier Basagaña ◽  
Lise Giorgis-Allemand ◽  
...  

2021 ◽  
Vol 20 (2) ◽  
pp. 69
Author(s):  
Billy Nugraha ◽  
Wahyudin Wahyudin ◽  
Fahriza Nurul Azizah

<p class="Abstrak">Research on human resource development has been carried out with the aim of establishing parameters for developing quality human resources. This study examines the influence of opportunity on employee performance appraisal. Opportunity is divided into three indicators: the environment, the rules, and colleagues. The sampling technique used was saturated (census) sampling. Data analysis was conducted in stages: editing, coding, scoring, and tabulation, using the Likert scale developed by Rensis Likert. Because the research uses statistical methods, the classical assumption tests must be satisfied before multiple linear regression is performed; this establishes the significance of the subsequent tests. The partial (sequential) t-test results give t<sub>count</sub> values of X<sub>1</sub> = 2.639, X<sub>2</sub> = 3.379, and X<sub>3</sub> = 2.210. The simultaneous (joint) F-test gives F<sub>count</sub> = 12.597. The R Square (R<sup>2</sup>) value is 0.282, i.e. 28.2%. It can be concluded that the three independent variables affect the dependent variable by 28.2%.</p>
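The abstract's pipeline (fit a three-predictor multiple linear regression, then report partial t-statistics, an overall F-statistic, and R²) can be sketched as below. The data are synthetic stand-ins for the survey responses, so the printed statistics will not reproduce the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
# X1: environment, X2: rules, X3: colleagues (synthetic predictors)
X = rng.normal(size=(n, 3))
y = 0.4 * X[:, 0] + 0.6 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(size=n)

# OLS fit with an intercept column
Xd = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
resid = y - Xd @ beta
df = n - Xd.shape[1]

# Partial t-statistics for the three slopes
sigma2 = resid @ resid / df
cov = sigma2 * np.linalg.inv(Xd.T @ Xd)
t_values = beta[1:] / np.sqrt(np.diag(cov)[1:])

# Overall F-statistic and R^2
ss_tot = ((y - y.mean()) ** 2).sum()
r2 = 1 - (resid @ resid) / ss_tot
f_value = (r2 / 3) / ((1 - r2) / df)

print("t:", t_values, "F:", f_value, "R^2:", r2)
```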


Author(s):  
Paolo Giudici

Several classes of computational and statistical methods for data mining are available. Each class can be parameterised so that models within the class differ in terms of such parameters (see, for instance, Giudici, 2003; Hastie et al., 2001; Han & Kamber, 2000; Hand et al., 2001; Witten & Frank, 1999): for example, the class of linear regression models, which differ in the number of explanatory variables; the class of Bayesian networks, which differ in the number of conditional dependencies (links in the graph); the class of tree models, which differ in the number of leaves; and the class of multi-layer perceptrons, which differ in the number of hidden strata and nodes. Once a class of models has been established, the problem is to choose the "best" model from it.
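One common way to choose the "best" model within a class, using the linear regression example above, is to score nested models of growing size with an information criterion. The sketch below uses BIC on synthetic data; BIC is one standard criterion among several, not the specific procedure of this text.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 150
X = rng.normal(size=(n, 5))
# Only the first two predictors actually matter in this toy setup
y = 1.5 * X[:, 0] + 0.9 * X[:, 1] + rng.normal(size=n)

def bic(k):
    """BIC of the linear regression using the first k explanatory variables."""
    Xd = np.column_stack([np.ones(n), X[:, :k]])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = ((y - Xd @ beta) ** 2).sum()
    return n * np.log(rss / n) + (k + 1) * np.log(n)

scores = {k: bic(k) for k in range(1, 6)}
best_k = min(scores, key=scores.get)   # lowest BIC wins
print("BIC per model size:", scores, "chosen size:", best_k)
```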


2018 ◽  
Vol 8 (1) ◽  
pp. 25-34 ◽  
Author(s):  
Bingjun Li ◽  
Weiming Yang ◽  
Xiaolu Li

Purpose The purpose of this paper is to address and overcome the problem that a single prediction model cannot accurately fit a data sequence with large fluctuations. Design/methodology/approach Initially, the grey linear regression combination model was put forward. The Discrete Grey Model (DGM)(1,1) and the multiple linear regression model were then combined using the entropy weight method. Grain yield from 2010 to 2015 was forecasted using DGM(1,1), a multiple linear regression model, the combined model, and a GM(1,N) model, and the predicted values were compared against the actual values. Findings The results reveal that the combination model used in this paper offers greater simulation precision. The combination model can be applied to series with fluctuations, and the weights of influencing factors in the model can be objectively evaluated. The simulation accuracy of the GM(1,N) model fluctuates greatly in this prediction. Practical implications The combined model adopted in this paper can be applied to grain forecasting to improve the accuracy of grain prediction. This is important because data on grain yield are typically characterised by large fluctuations, and some information is often missed. Originality/value This paper puts forward the grey linear regression combination model, which combines the DGM(1,1) model and the multiple linear regression model, using the entropy weight method to determine the weighting of the two models' results. It is intended that prediction accuracy can be improved through the combination of models used within this paper.
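The entropy-weight combination step described above can be sketched as follows: normalize each model's errors into a probability vector, compute its Shannon entropy, and weight each model by its divergence (1 − entropy). The yield series and the two sets of fitted values below are invented placeholders, not the paper's grain data.

```python
import numpy as np

# Hypothetical actual yields and two models' fitted values (NOT the paper's data)
actual   = np.array([546.5, 571.2, 589.6, 602.6, 607.1, 621.4])
pred_dgm = np.array([548.0, 569.5, 591.2, 600.1, 610.3, 618.9])  # DGM(1,1)-style fit
pred_mlr = np.array([545.2, 573.0, 588.1, 604.8, 605.5, 623.0])  # regression-style fit

def norm_entropy(errors):
    # Normalize absolute errors to a probability vector, then take
    # Shannon entropy scaled to [0, 1]
    p = np.abs(errors) / np.abs(errors).sum()
    p = np.where(p == 0, 1e-12, p)
    return -(p * np.log(p)).sum() / np.log(len(p))

e = np.array([norm_entropy(actual - pred_dgm), norm_entropy(actual - pred_mlr)])
d = 1 - e                       # degree of divergence of each model
w = d / d.sum()                 # entropy weights, summing to 1
combined = w[0] * pred_dgm + w[1] * pred_mlr
print("weights:", w, "combined forecast:", combined)
```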


2018 ◽  
Vol 1 (1) ◽  
pp. 087-094
Author(s):  
Ukurta Tarigan ◽  
Erwin Sitorus ◽  
Veronica Veronica

PT. PQR is a company engaged in the production of lightweight bricks, which are produced using the Autoclaved Aerated Concrete (AAC) system. Observations at the company showed that lightweight brick products are often damaged by falling from the pallet when moved using a hand lift. This is caused by the long transfer distance between the production floor and the product warehouse, which makes the move difficult for operators. The purpose of this study was to redesign the facility layout using the Fuzzy Analytical Hierarchy Process (Fuzzy AHP) together with the Systematic Layout Planning (SLP) method to obtain an efficient design. The pairwise comparison matrix weighting showed that labor is the variable with the highest criterion weight, 0.2735, and the analysis found that closeness between departments is strongly influenced by the movement of labor (operators) between departments. Based on this analysis, an activity relationship chart (ARC) was designed to capture the closeness between departments, and a new layout was then designed taking the area of each department into account. Compared with the initial design, the new layout reduces the transfer distance by 165.01 m.


Hydrology ◽  
2021 ◽  
Vol 8 (4) ◽  
pp. 153
Author(s):  
Eva Melišová ◽  
Adam Vizina ◽  
Martin Hanel ◽  
Petr Pavlík ◽  
Petra Šuhájková

Evaporation is an important factor in the overall hydrological balance. It is usually derived from the balance of runoff, precipitation and the change in water storage in a catchment. The magnitude of actual evaporation is determined by the quantity of available water and is heavily influenced by climatic and meteorological factors. Statistical methods such as linear regression, random forest regression and other machine learning methods are currently used to calculate evaporation; however, deriving these relationships requires observations from evaporation stations. In the present study, linear regression and random forest regression were used to calculate evaporation, with part of the models designed manually and the other part built with stepwise regression. Observed data from 24 evaporation stations and ERA5-Land climate reanalysis data were used to create the regression models, and the proposed regression formulas were tested on 33 water reservoirs. The results show that manual regression is a more appropriate method for calculating evaporation than stepwise regression, with the caveat that it is more time consuming. The difference between linear and random forest regression lies in the variance of the data: random forest regression is better able to fit the observed data, while the result of linear regression is simpler to interpret. The study showed that using reanalysed ERA5-Land data with the random forest regression method is suitable for calculating evaporation from water reservoirs under the conditions of the Czech Republic.
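The stepwise-regression alternative mentioned above can be sketched as a forward selection loop that adds the predictor giving the largest gain in adjusted R². The predictor names and data below are hypothetical stand-ins for ERA5-Land variables, not the study's inputs.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120
names = ["temperature", "wind_speed", "humidity", "net_radiation"]
X = rng.normal(size=(n, 4))
# Toy evaporation series driven mainly by temperature and net radiation
y = 1.2 * X[:, 0] + 0.8 * X[:, 3] + rng.normal(scale=0.5, size=n)

def adj_r2(cols):
    """Adjusted R^2 of the linear model using the given predictor columns."""
    Xd = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    r2 = 1 - resid @ resid / ((y - y.mean()) ** 2).sum()
    return 1 - (1 - r2) * (n - 1) / (n - Xd.shape[1])

# Forward stepwise selection: keep adding the best predictor while it helps
selected, best = [], -np.inf
while True:
    candidates = [c for c in range(4) if c not in selected]
    if not candidates:
        break
    scores = {c: adj_r2(selected + [c]) for c in candidates}
    c_best = max(scores, key=scores.get)
    if scores[c_best] <= best:
        break
    selected.append(c_best)
    best = scores[c_best]

print("selected:", [names[c] for c in selected], "adjusted R^2:", round(best, 3))
```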




1992 ◽  
Vol 17 (1) ◽  
pp. 51-74 ◽  
Author(s):  
Clifford C. Clogg ◽  
Eva Petkova ◽  
Edward S. Shihadeh

We give a unified treatment of statistical methods for assessing collapsibility in regression problems, including some possible extensions to the class of generalized linear models. Terminology is borrowed from the contingency table area, where various methods for assessing collapsibility have been proposed. Our procedures, however, can be motivated by considering extensions, and alternative derivations, of common procedures for omitted-variable bias in linear regression. Exact tests and interval estimates with optimal properties are available for linear regression with normal errors, and asymptotic procedures follow for models with estimated weights. The methods given here can be used to compare β1 and β2 in the common setting where the response function is first modeled as Xβ1 (reduced model) and then as Xβ2 + Zγ (full model), with Z a vector of covariates omitted from the reduced model. These procedures can be used in experimental settings (X = randomly assigned treatments, Z = covariates) or in nonexperimental settings where two models viewed as alternative behavioral or structural explanations are compared (one model with X only, another model with X and Z).
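A minimal numerical illustration of the reduced-versus-full comparison above: fit y on X alone to get β1, then on X and the omitted covariate Z to get β2, and inspect the omitted-variable shift β1 − β2. The data are synthetic, with X deliberately correlated with Z so the shift is visible; this is a toy demonstration, not the paper's exact or asymptotic procedures.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
z = rng.normal(size=n)
x = 0.7 * z + rng.normal(size=n)     # X correlated with the omitted Z
y = 1.0 * x + 2.0 * z + rng.normal(size=n)

def ols(design, y):
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta

ones = np.ones(n)
b_reduced = ols(np.column_stack([ones, x]), y)[1]      # beta_1: y ~ X
b_full    = ols(np.column_stack([ones, x, z]), y)[1]   # beta_2: y ~ X + Z
print("beta_1:", b_reduced, "beta_2:", b_full,
      "omitted-variable shift:", b_reduced - b_full)
```

Because cov(X, Z) > 0 and γ > 0 here, β1 overstates the effect of X; the shift approximates γ·cov(X, Z)/var(X).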


Mathematics ◽  
2021 ◽  
Vol 9 (15) ◽  
pp. 1742
Author(s):  
Hugo Barros ◽  
Teresa Pereira ◽  
António G. Ramos ◽  
Fernanda A. Ferreira

This paper presents a study on the complexity of cargo arrangements in the pallet loading problem. Owing to the diversity of perspectives presented in the literature, complexity is one of the least studied practical constraints. In this work, we aim to refine and propose a new set of metrics to measure the complexity of an arrangement of cargo on a pallet. The parameters are validated using statistical methods, such as principal component analysis and multiple linear regression, on data retrieved from the company's logistics operations. Our tests show that the number of boxes was the main variable responsible for explaining complexity in the pallet loading problem.
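The validation pipeline this abstract describes can be sketched as follows: run principal component analysis over a few arrangement metrics, then regress a complexity score on the leading component. The metric names and data below are hypothetical, chosen only so that the number of boxes dominates, loosely echoing the paper's finding.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
# Hypothetical arrangement metrics, all loosely driven by the box count
n_boxes = rng.integers(5, 60, size=n).astype(float)
n_box_types = np.clip(n_boxes / 5 + rng.normal(scale=1, size=n), 1, None)
n_layers = np.clip(n_boxes / 10 + rng.normal(scale=1, size=n), 1, None)
complexity = 0.8 * n_boxes + rng.normal(scale=3, size=n)   # toy target score

# PCA via SVD on the standardized metric matrix
M = np.column_stack([n_boxes, n_box_types, n_layers])
Mc = (M - M.mean(axis=0)) / M.std(axis=0)
U, s, Vt = np.linalg.svd(Mc, full_matrices=False)
pc1 = Mc @ Vt[0]                                # scores on the first component
explained = s[0] ** 2 / (s ** 2).sum()          # variance explained by PC1

# Simple regression of the complexity score on PC1
slope = (pc1 @ (complexity - complexity.mean())) / (pc1 @ pc1)
print("variance explained by PC1:", round(explained, 2), "slope:", slope)
```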

