A Fuzzy-Decomposition Grey Modeling Procedure for Management Decision Analysis

2021 ◽  
Vol 2021 ◽  
pp. 1-6
Author(s):  
Jianhong Guo ◽  
Che-Jung Chang ◽  
Yingyi Huang ◽  
Kun-Peng Yu

To cope with an increasingly fierce competitive market environment, enterprises need to respond quickly to business issues and maintain their advantages, which requires timely and correct decisions. In this context, general mathematical modeling methods may overfit when applied to small data sets, making it difficult to ensure good analytical performance. It is therefore important for enterprises to be able to analyze and forecast with limited samples. Over the past few decades, the grey model and its extensions have been shown to be effective tools for processing small data sets. To further enhance the handling of data uncertainty, a fuzzy-decomposition modeling procedure for grey models is developed. Specifically, the Latent Information (LI) function is employed to decompose the initial series into three subseries; next, the three subseries are used to build three grey models that produce estimated values for each subseries; finally, the weighted average method is applied to combine the three estimates into a single final predicted value. In a test on actual monthly demand data for thin-film transistor liquid crystal display panels, the proposed fuzzy-decomposition modeling procedure produced good prediction outcomes and is thus an appropriate decision analysis tool for managers.
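The pipeline the abstract describes (fit a grey model per subseries, then combine by weighted average) can be sketched as follows. This is a minimal GM(1,1) implementation in numpy; the subseries and weights in the example are invented for illustration, since the paper derives them from the LI function.

```python
import numpy as np

def gm11_forecast(x, steps=1):
    """Fit a GM(1,1) grey model to a small series x and forecast `steps` ahead."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                       # accumulated generating operation (AGO)
    z = 0.5 * (x1[:-1] + x1[1:])            # mean sequence of consecutive AGO values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]   # development and control coefficients
    k = np.arange(1, len(x) + steps)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a  # prediction in the AGO domain
    x_hat = np.diff(np.concatenate([[x[0]], x1_hat])) # inverse AGO recovers the series
    return x_hat[-steps:]

# Hypothetical final combination step: three grey models, one per subseries,
# merged by a weighted average (subseries and weights are made up here).
subseries = [[10, 12, 13, 15], [8, 9, 11, 12], [12, 14, 15, 17]]
weights = [0.25, 0.5, 0.25]
combined = sum(w * gm11_forecast(s, 1)[0] for s, w in zip(subseries, weights))
```

GM(1,1) fits an exponential trend in the accumulated domain, which is why it can extrapolate from as few as four observations.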

Author(s):  
Che-Jung Chang ◽  
Guiping Li ◽  
Shao-Qing Zhang ◽  
Kun-Peng Yu

Effective determination of trends in sulfur dioxide emissions facilitates national efforts to draft an appropriate policy for lowering sulfur dioxide emissions, which is essential for reducing atmospheric pollution. However, to reflect the current situation, a sound emission reduction policy should be based on up-to-date information. Various forecasting methods have been developed, but their applications are often limited by insufficient data. Grey system theory is one potential approach for analyzing small data sets. In this study, an improved modeling procedure based on grey system theory and the mega-trend-diffusion technique is proposed to forecast sulfur dioxide emissions in China. The experimental results indicate that, compared with support vector regression and a radial basis function network, the proposed procedure can effectively handle forecasting problems involving small data sets. In addition, the forecast predicts a steady decline in China's sulfur dioxide emissions. These findings can be used by the Chinese government to determine whether its current policy to reduce sulfur dioxide emissions is appropriate.
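Mega-trend-diffusion augments a small sample by estimating diffused lower and upper bounds around the data and drawing virtual samples within them. The sketch below follows the commonly cited formulation of the technique and may differ in detail from the procedure used in this paper; the skewness weights and the diffusion spread are assumptions of that formulation.

```python
import numpy as np

def mega_trend_diffusion_bounds(x):
    """Estimate diffused lower/upper bounds for a small sample."""
    x = np.asarray(x, dtype=float)
    u = (x.min() + x.max()) / 2.0           # center of the data range
    n_l = max(int(np.sum(x < u)), 1)        # samples left of the center
    n_u = max(int(np.sum(x > u)), 1)        # samples right of the center
    skew_l = n_l / (n_l + n_u)              # skewness weight, left side
    skew_u = n_u / (n_l + n_u)              # skewness weight, right side
    var = x.var(ddof=1)
    spread = np.sqrt(-2.0 * var * np.log(1e-20))  # diffusion spread
    lower = u - skew_l * spread / np.sqrt(n_l)
    upper = u + skew_u * spread / np.sqrt(n_u)
    return min(lower, x.min()), max(upper, x.max())

def virtual_samples(x, m, seed=0):
    """Draw m virtual samples uniformly within the diffused bounds."""
    lo, hi = mega_trend_diffusion_bounds(x)
    return np.random.default_rng(seed).uniform(lo, hi, size=m)
```

The virtual samples widen the training set so that a grey model (or any other learner) sees more of the plausible data domain than the few real observations cover.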


2002 ◽  
Vol 2 (1) ◽  
pp. 51-57 ◽  
Author(s):  
I Gusti Ngurah Darmawan

Evaluation studies often lack sophistication in their statistical analyses, particularly where there are small data sets or missing data. Until recently, the methods used for analysing incomplete data focused on removing the missing values, either by deleting records with incomplete information or by substituting the missing values with estimated mean scores. These methods, though simple to implement, are problematic. However, recent advances in theoretical and computational statistics have led to more flexible techniques with sound statistical bases. These procedures involve multiple imputation (MI), a technique in which the missing values are replaced by m > 1 estimated values, where m is typically small (e.g. 3-10). Each of the resultant m data sets is then analysed by standard methods, and the results are combined to produce estimates and confidence intervals that incorporate missing data uncertainty. This paper reviews the key ideas of multiple imputation, discusses the currently available software programs relevant to evaluation studies, and demonstrates their use with data from a study of the adoption and implementation of information technology in Bali, Indonesia.
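The pooling step described above (analyze each of the m completed data sets, then combine) is usually done with Rubin's rules. The sketch below shows that combination for a scalar estimate; the input numbers in the test are hypothetical.

```python
import numpy as np

def pool_mi_estimates(estimates, variances):
    """Combine m per-imputation point estimates and their sampling variances
    with Rubin's rules; returns the pooled estimate and its total variance."""
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    m = len(q)
    q_bar = q.mean()                # pooled point estimate
    u_bar = u.mean()                # within-imputation variance
    b = q.var(ddof=1)               # between-imputation variance
    t = u_bar + (1 + 1 / m) * b     # total variance, inflated for missing-data uncertainty
    return q_bar, t
```

The between-imputation term `b` is what carries the missing-data uncertainty into the final confidence interval, which is why multiple imputation outperforms single mean substitution.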


Author(s):  
Jianping Ju ◽  
Hong Zheng ◽  
Xiaohang Xu ◽  
Zhongyuan Guo ◽  
Zhaohui Zheng ◽  
...  

Abstract: Although convolutional neural networks have achieved success in image classification, challenges remain in agricultural product quality sorting, such as machine vision-based jujube defect detection. The performance of jujube defect detection depends mainly on the feature extraction and the classifier used. Due to the diversity of jujube materials and the variability of the testing environment, traditional manually extracted features often fail to meet the requirements of practical application. In this paper, a jujube sorting model for small data sets based on a convolutional neural network and transfer learning is proposed to meet the practical demands of jujube defect detection. First, the original images collected from an actual jujube sorting production line were pre-processed and augmented to establish a data set of five categories of jujube defects. The original CNN model was then improved by embedding an SE module and by using the triplet loss function and the center loss function in place of the softmax loss function. Finally, a model pre-trained on the ImageNet data set was trained on the jujube defects data set, so that the pre-trained parameters could fit the distribution of the jujube defect images, completing the transfer of the model and enabling detection and classification of jujube defects. Classification results are visualized with heatmaps, and classification accuracy and confusion matrices are analyzed against comparison models. The experimental results show that the SE-ResNet50-CL model improves the fine-grained classification of jujube defects, reaching a test accuracy of 94.15%. The model is stable and achieves high recognition accuracy in complex environments.
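The center-loss term that the abstract substitutes for softmax loss pulls each deep feature toward the center of its class, tightening intra-class variation for fine-grained defects. This is a minimal numpy sketch of that term only; the paper's actual implementation operates on SE-ResNet50 features during training, with learnable centers and a triplet-loss companion.

```python
import numpy as np

def center_loss(features, labels, centers):
    """Center loss: half the mean squared distance between each feature
    vector and the center of its class."""
    features = np.asarray(features, dtype=float)
    centers = np.asarray(centers, dtype=float)
    diffs = features - centers[np.asarray(labels)]  # offset from each sample's class center
    return 0.5 * np.mean(np.sum(diffs**2, axis=1))
```

In training, this loss is typically weighted and added to a discriminative loss, so features become both separable between classes and compact within each class.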


2020 ◽  
Vol 98 (Supplement_4) ◽  
pp. 8-9
Author(s):  
Zahra Karimi ◽  
Brian Sullivan ◽  
Mohsen Jafarikia

Abstract: Previous studies have shown that the accuracy of the Genomic Estimated Breeding Value (GEBV) as a predictor of future performance is higher than that of the traditional Estimated Breeding Value (EBV). The purpose of this study was to estimate the potential advantage of selection on GEBV for litter size (LS) compared to selection on EBV in the Canadian swine dam line breeds. The study included 236 Landrace and 210 Yorkshire gilts born in 2017 which had their first farrowing after 2017. GEBV and EBV for LS were calculated with data available at the end of 2017 (GEBV2017 and EBV2017, respectively). De-regressed EBV for LS in July 2019 (dEBV2019) was used as an adjusted phenotype. The average dEBV2019 for the top 40% of sows based on GEBV2017 was compared to the average dEBV2019 for the top 40% of sows based on EBV2017. The standard error of the estimated difference for each breed was estimated by comparing the average dEBV2019 for repeated random samples of two sets of 40% of the gilts. In comparison to the top 40% ranked based on EBV2017, ranking based on GEBV2017 resulted in an extra 0.45 (±0.29) and 0.37 (±0.25) piglets born per litter in Landrace and Yorkshire replacement gilts, respectively. The estimated Type I errors of the GEBV2017 gain over EBV2017 were 6% and 7% in Landrace and Yorkshire, respectively. Selecting both replacement boars and replacement gilts using GEBV instead of EBV could translate into an increased annual genetic gain of 0.3 extra piglets per litter, which would more than double the rate of gain observed from typical EBV-based selection. The permutation test used for validation in this study appears effective with relatively small data sets and could be applied to other traits, other species and other prediction methods.
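The comparison and its repeated-random-sampling standard error can be sketched in a few lines. The function names and the synthetic phenotype values below are invented for illustration; the study's actual inputs are the GEBV2017, EBV2017 and dEBV2019 values per gilt.

```python
import numpy as np

def top_fraction_gain(scores_a, scores_b, phenotype, frac=0.4):
    """Mean adjusted phenotype of the top `frac` ranked by scores_a minus
    the same for scores_b (e.g. GEBV2017 vs EBV2017 rankings)."""
    phenotype = np.asarray(phenotype, dtype=float)
    k = int(len(phenotype) * frac)
    top_a = np.argsort(scores_a)[::-1][:k]     # indices of the top-k by ranking A
    top_b = np.argsort(scores_b)[::-1][:k]     # indices of the top-k by ranking B
    return phenotype[top_a].mean() - phenotype[top_b].mean()

def permutation_se(phenotype, frac=0.4, n_rep=1000, seed=0):
    """Standard error of the difference between two random top-`frac` groups,
    mirroring the repeated-random-sampling check described above."""
    phenotype = np.asarray(phenotype, dtype=float)
    rng = np.random.default_rng(seed)
    k = int(len(phenotype) * frac)
    diffs = []
    for _ in range(n_rep):
        idx = rng.permutation(len(phenotype))
        diffs.append(phenotype[idx[:k]].mean() - phenotype[idx[k:2 * k]].mean())
    return float(np.std(diffs, ddof=1))
```

The observed gain is then judged against this null distribution of differences between random groups, which is how the reported Type I errors of 6-7% would arise.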


Author(s):  
Jungeui Hong ◽  
Elizabeth A. Cudney ◽  
Genichi Taguchi ◽  
Rajesh Jugulum ◽  
Kioumars Paryani ◽  
...  

The Mahalanobis-Taguchi System is a diagnostic and predictive method for analyzing patterns in multivariate cases. The goal of this study is to compare the ability of the Mahalanobis-Taguchi System and a neural network to discriminate using small data sets. We examine the discriminant ability as a function of data set size using an application area where reliable data is publicly available. The study uses the Wisconsin Breast Cancer data set, with nine attributes and one class.
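The discriminating measure at the core of the Mahalanobis-Taguchi System is the scaled Mahalanobis distance of each observation from a reference ("normal") group: distances near 1 indicate normal observations, while clearly larger values flag abnormal ones. A minimal sketch of that measure:

```python
import numpy as np

def mahalanobis_distances(normal, samples):
    """Scaled Mahalanobis distance of each sample from the reference group,
    normalized by the number of attributes k so normal samples score near 1."""
    normal = np.asarray(normal, dtype=float)
    samples = np.asarray(samples, dtype=float)
    mean = normal.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))  # inverse covariance of the normal group
    z = samples - mean
    k = normal.shape[1]                                    # number of attributes
    return np.einsum('ij,jk,ik->i', z, cov_inv, z) / k     # squared distance per sample, / k
```

The full system additionally uses orthogonal arrays and signal-to-noise ratios to prune attributes; this sketch covers only the distance computation.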


2018 ◽  
Vol 121 (16) ◽  
Author(s):  
Wei-Chia Chen ◽  
Ammar Tareen ◽  
Justin B. Kinney

2011 ◽  
Vol 19 (2-3) ◽  
pp. 133-145
Author(s):  
Gabriela Turcu ◽  
Ian Foster ◽  
Svetlozar Nestorov

Text analysis tools are increasingly required to process large corpora that are often organized as many small files (abstracts, news articles, etc.). Cloud computing offers a convenient, on-demand, pay-as-you-go environment for solving such problems. We investigate provisioning on the Amazon EC2 cloud from the user's perspective, aiming at a scheduling strategy that is both timely and cost-effective. We derive an execution plan from an empirically determined application performance model. A first goal of our performance measurements is to determine an optimal file size for our application to consume. Using the subset-sum first-fit heuristic, we reshape the input data by merging files to match the desired file size as closely as possible. This also speeds up retrieval of the application's results, since the output is less segmented. Using predictions of application performance based on measurements on small data sets, we devise an execution plan that meets a user-specified deadline while minimizing cost.
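The file-merging step can be sketched as a first-fit bin-packing pass: each file goes into the first group whose total stays within the target merged-file size. This is a generic first-fit-decreasing sketch, not the authors' exact heuristic; the sizes in the test are illustrative.

```python
def first_fit_merge(file_sizes, target):
    """Group file sizes so each group's total stays within `target`,
    approximating the desired merged-file size with as few groups as fit."""
    groups = []
    for size in sorted(file_sizes, reverse=True):   # first-fit decreasing order
        for g in groups:
            if sum(g) + size <= target:             # place in the first group with room
                g.append(size)
                break
        else:
            groups.append([size])                   # no group fits: open a new one
    return groups
```

Each resulting group would then be concatenated into one input file close to the empirically optimal size, reducing per-file overhead on EC2.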

