PREDICTING TELECOMMUNICATION TOWER COSTS USING FUZZY SUBTRACTIVE CLUSTERING

2014 ◽  
Vol 21 (1) ◽  
pp. 67-74 ◽  
Author(s):  
Mohamed Marzouk ◽  
Mohamed Alaraby

This paper presents a fuzzy subtractive modelling technique to predict the weight of telecommunication towers, which is then used to estimate their respective costs. This is implemented by utilizing data from previously installed telecommunication towers, considering four input parameters: a) tower height; b) allowed tilt or deflection; c) antenna subjected area loading; and d) wind load. Telecommunication towers are classified according to designated code (TIA-222-F and TIA-222-G standards) and structure type (Self-Supporting Tower (SST) and Roof Top (RT)). As such, four fuzzy subtractive models are developed to represent the four classes. To build the fuzzy models, 90% of the data are fed to Matlab software as training data; the remaining 10% are used to test model performance. A first-order Sugeno-type model is used to optimize performance in predicting tower weights. Errors are estimated using Mean Absolute Percentage Error (MAPE) and Root Mean Square Error (RMSE) for both the training and testing data sets. Sensitivity analysis is carried out to validate the model and to observe the effect of the cluster radius on model performance.
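As a rough illustration of the clustering step (not the authors' Matlab implementation), subtractive clustering scores every data point by the density of its neighbors within a radius and repeatedly picks the highest-potential point as a cluster center. The radius `ra`, the quash radius `1.5 * ra`, and the stopping ratio below are conventional defaults, and the toy data are hypothetical:

```python
import numpy as np

def subtractive_clustering(X, ra=2.0, stop_ratio=0.15):
    """Chiu-style subtractive clustering: pick cluster centers by point potential."""
    alpha = 4.0 / ra ** 2
    beta = 4.0 / (1.5 * ra) ** 2          # quash radius rb = 1.5 * ra
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    P = np.exp(-alpha * d2).sum(axis=1)   # potential = density of nearby points
    first_peak, centers = P.max(), []
    while P.max() >= stop_ratio * first_peak:
        c = P.argmax()
        centers.append(X[c])
        # quash potential around the new center so the next peak falls elsewhere
        P = P - P[c] * np.exp(-beta * ((X - X[c]) ** 2).sum(-1))
    return np.array(centers)

# two tight, well-separated blobs -> two cluster centers expected
pts = np.array([[0, 0], [0.1, 0], [0, 0.1], [0.1, 0.1], [-0.1, 0],
                [10, 10], [10.1, 10], [10, 10.1], [10.1, 10.1], [9.9, 10]])
centers = subtractive_clustering(pts)
print(len(centers))  # 2
```

Each center found this way would then seed one fuzzy rule of the Sugeno model.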

2015 ◽  
Vol 2 (3) ◽  
pp. 181
Author(s):  
Wiwi Widayani ◽  
Kusrini Kusrini ◽  
Hanif Al Fatta

Indonesian population growth and increasing industrial demand for shallots, unmatched by domestic production, prompted the government to open shallot imports. Imports are made to maintain the balance of shallot prices and supply so that inflation caused by rising shallot prices can be suppressed; however, importing the wrong amount causes losses for farmers, so a support system for determining import volume is needed to maintain market price balance and meet shallot demand. The decision support system designed here applies the Tsukamoto Fuzzy Inference System (FIS). The system allows the user to perform training and testing; the training process comprises: 1) clustering the training data using the K-Means algorithm, 2) rule extraction, 3) testing the training data by computing the import value with Tsukamoto fuzzy inference, 4) analyzing the error of the fuzzy results using MAPE (Mean Absolute Percentage Error), and 5) testing the test data and analyzing its error. Model testing shows that determining shallot imports with the input parameters producer price, consumer price, production, consumption, import price, and exchange rate against 60 training records yields a lowest error of 0.07 at 12 clusters, while testing the inference engine on the test data yields an error of 0.25.
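A minimal sketch of Tsukamoto inference (illustrative only; the paper's actual rule base is extracted from K-Means clusters over six inputs, while the two rules, the single `demand` input, and the ranges below are hypothetical):

```python
def tsukamoto_forecast(demand, d_max=10.0, z_max=100.0):
    """Two-rule Tsukamoto FIS: each rule's crisp output is the point where a
    monotonic output membership function equals the rule's firing strength."""
    # firing strengths from linear input membership functions
    a_high = min(max(demand / d_max, 0.0), 1.0)        # "demand is HIGH"
    a_low = 1.0 - a_high                               # "demand is LOW"
    # Rule 1: demand HIGH -> import HIGH (increasing membership): z = a * z_max
    z1 = a_high * z_max
    # Rule 2: demand LOW -> import LOW (decreasing membership): z = z_max - a * z_max
    z2 = z_max - a_low * z_max
    # defuzzify: weighted average of the rules' crisp outputs
    return (a_high * z1 + a_low * z2) / (a_high + a_low)

print(tsukamoto_forecast(7.0))  # 70.0
```

The weighted-average defuzzification is what distinguishes Tsukamoto from Mamdani-style inference, which aggregates output fuzzy sets before defuzzifying.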


2018 ◽  
Vol 57 (04) ◽  
pp. 220-229
Author(s):  
Tung-I Tsai ◽  
Yaofeng Zhang ◽  
Gy-Yi Chao ◽  
Cheng-Chieh Tsai ◽  
Zhigang Zhang

Summary Background: Radiotherapy has serious side effects and thus requires prudent and cautious evaluation. However, obtaining protein expression profiles is expensive and time-consuming, making it necessary to develop a theoretical and rational procedure for predicting the radiotherapy outcome for bladder cancer when working with limited data. Objective: A procedure for estimating the performance of radiotherapy is proposed in this research. The population domain (range of the population) of proteins and the relationships among proteins are considered to increase prediction accuracy. Methods: This research uses modified extreme value theory (MEVT), which is used to estimate the population domain of proteins, and correlation coefficients and prediction intervals to overcome the lack of knowledge regarding relationships among proteins. Results: When the size of the training data set was 5 samples, the mean absolute percentage error rate (MAPE) was 31.6200%; MAPE fell to 13.5505% when the number of samples was increased to 30. The standard deviation (SD) of the forecasting error fell from 3.0609% for 5 samples to 1.2415% for 30 samples. These results show that the proposed procedure yields accurate and stable results and is suitable for use with small data sets. Conclusions: The results show that considering the relationships among proteins is necessary when predicting the outcome of radiotherapy.
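The two error statistics reported above can be computed as follows (a generic sketch with made-up numbers, not the study's data):

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100)

def error_sd(actual, predicted):
    """Standard deviation of the per-sample absolute percentage errors."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.std(np.abs((actual - predicted) / actual) * 100))

y_true, y_pred = [100, 200, 50], [110, 180, 50]
print(round(mape(y_true, y_pred), 4))  # 6.6667
```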


The paper aims to identify input variables of fuzzy systems, generate fuzzy rule bases using fuzzy subtractive clustering, and apply a Takagi-Sugeno fuzzy system to predict rice stocks in Indonesia. The monthly rice procurement dataset for the period January 2000 to March 2017 is divided into training data (January 2000 to March 2016) and testing data (April 2016 to March 2017). The identified fuzzy system input variables are lags of the series. Input-output fuzzy subtractive clustering, with the optimal number of groups selected using the cluster tightness measure indicator, produced 4 fuzzy rules. The fuzzy system achieves an R2 of 0.8582 on the training data and an R2 of 0.7513 on the testing data.
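Constructing the lagged inputs for such a model can be sketched as below (the specific lags are not listed in the abstract, so lags 1 and 2 here are hypothetical):

```python
import numpy as np

def make_lagged(series, lags):
    """Turn a 1-D series into (X, y): each row of X holds the chosen lags of y."""
    series = np.asarray(series, float)
    max_lag = max(lags)
    X = np.column_stack([series[max_lag - lag:len(series) - lag] for lag in lags])
    y = series[max_lag:]
    return X, y

X, y = make_lagged([1, 2, 3, 4, 5, 6], lags=[1, 2])
print(X.shape, y.tolist())  # (4, 2) [3.0, 4.0, 5.0, 6.0]
```

The resulting (X, y) pairs are what the subtractive clustering and Takagi-Sugeno consequents would be fitted on.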


The project “Disease Prediction Model” focuses on predicting the type of skin cancer. It constructs a sequential Convolutional Neural Network (CNN) model to identify the type of a skin cancer, a disease that takes a huge toll on human well-being. Since automated methods greatly increase the accuracy of identifying the type of skin cancer, we use the CNN algorithm to build our model, making use of a sequential architecture. The dataset considered for this project, collected from NCBI and well known as the HAM10000 dataset, consists of a large collection of dermatoscopic images of the most common pigmented skin lesions, gathered from different patients. Once the dataset is collected and cleaned, it is split into training and testing sets. We built the model with a CNN, trained it on the training data, and then evaluated it on the testing data. Once the model is run over the testing data, plots are made to analyze the relation between epochs and the loss function, and likewise between epochs and accuracy, for both training and testing data.
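At the core of each CNN layer is a discrete 2-D cross-correlation; a minimal numpy sketch (not the project's actual model) of what a single filter computes over a single channel:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2-D cross-correlation: slide the kernel, sum elementwise products."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = (image[r:r + kh, c:c + kw] * kernel).sum()
    return out

img = np.arange(9, dtype=float).reshape(3, 3)   # toy 3x3 "image"
k = np.ones((2, 2))                             # toy 2x2 filter
print(conv2d_valid(img, k))
```

A sequential CNN stacks many such filters, interleaved with nonlinearities and pooling, with the filter weights learned from the training images.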


Geophysics ◽  
2021 ◽  
Vol 86 (6) ◽  
pp. KS151-KS160
Author(s):  
Claire Birnie ◽  
Haithem Jarraya ◽  
Fredrik Hansteen

Deep learning applications are drastically progressing in seismic processing and interpretation tasks. However, most approaches subsample data volumes and restrict model sizes to minimize computational requirements. Subsampling the data risks losing vital spatiotemporal information which could aid training, whereas restricting model sizes can impact model performance, or in some extreme cases renders more complicated tasks such as segmentation impossible. We have determined how to tackle the two main issues of training of large neural networks (NNs): memory limitations and impracticably large training times. Typically, training data are preloaded into memory prior to training, a particular challenge for seismic applications in which the data format is typically four times larger than that used for standard image processing tasks (float32 versus uint8). Based on an example from microseismic monitoring, we evaluate how more than 750 GB of data can be used to train a model by using a data generator approach, which only stores in memory the data required for that training batch. Furthermore, efficient training over large models is illustrated through the training of a seven-layer U-Net with input data dimensions of [Formula: see text] (approximately [Formula: see text] million parameters). Through a batch-splitting distributed training approach, the training times are reduced by a factor of four. The combination of data generators and distributed training removes any necessity of data subsampling or restriction of NN sizes, offering the opportunity to use larger networks, higher resolution input data, or move from 2D to 3D problem spaces.
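The data-generator idea described above can be sketched in a few lines: the full array stays on disk (here via numpy's memory mapping) and only one batch is materialized in RAM per training step. The file name and batch size are illustrative:

```python
import numpy as np, tempfile, os

def batch_generator(path, batch_size):
    """Yield training batches without loading the whole array into memory."""
    data = np.load(path, mmap_mode="r")           # memory-mapped: reads lazily
    for start in range(0, len(data), batch_size):
        yield np.asarray(data[start:start + batch_size])  # copy only this batch

# demo with a small on-disk array standing in for >750 GB of seismic data
path = os.path.join(tempfile.mkdtemp(), "waveforms.npy")
np.save(path, np.zeros((10, 4)))
batches = list(batch_generator(path, batch_size=4))
print([b.shape for b in batches])  # [(4, 4), (4, 4), (2, 4)]
```

Deep learning frameworks accept such generators directly in their training loops, which is what removes the need to subsample the volume up front.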


2021 ◽  
Vol 12 (1) ◽  
pp. 1-11
Author(s):  
Kishore Sugali ◽  
Chris Sprunger ◽  
Venkata N Inukollu

Artificial Intelligence and Machine Learning have been around for a long time. In recent years, there has been a surge in popularity for applications integrating AI and ML technology. As with traditional development, software testing is a critical component of a successful AI/ML application. The development methodology used in AI/ML contrasts significantly with traditional development, and in light of these distinctions, various software testing challenges arise. The emphasis of this paper is on the challenge of effectively splitting the data into training and testing data sets. By applying a k-Means clustering strategy to the data set followed by a decision tree, we can significantly increase the likelihood that the training data set represents the domain of the full data set, and thus avoid training a model that is likely to fail because it has learned only a subset of the full data domain.
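A sketch of that splitting strategy (a numpy-only stand-in for k-Means; the decision-tree verification step is omitted): cluster the data, then draw the training split proportionally from every cluster so that no region of the data domain is left out of training.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Minimal Lloyd's algorithm with farthest-point initialization."""
    centers = [X[0]]
    while len(centers) < k:
        d2 = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[d2.argmax()])           # next center: farthest point so far
    centers = np.array(centers)
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels

def cluster_split(X, k=2, train_frac=0.8, seed=0):
    """Build a train index that draws proportionally from every cluster."""
    labels = kmeans(X, k)
    rng = np.random.default_rng(seed)
    train = []
    for j in range(k):
        idx = np.flatnonzero(labels == j)
        rng.shuffle(idx)
        train.extend(idx[: max(1, int(train_frac * len(idx)))])
    return sorted(train), labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(5, 0.1, (10, 2))])
train_idx, labels = cluster_split(X)
print(len(train_idx))  # 16
```

A plain random split could, by chance, starve one cluster; the per-cluster draw guarantees each region contributes to training.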


2008 ◽  
Author(s):  
Pieter Kitslaar ◽  
Michel Frenay ◽  
Elco Oost ◽  
Jouke Dijkstra ◽  
Berend Stoel ◽  
...  

This document describes a novel scheme for the automated extraction of the central lumen lines of coronary arteries from computed tomography angiography (CTA) data. The scheme first obtains a segmentation of the whole coronary tree and subsequently extracts the centerlines from this segmentation. The first steps of the segmentation algorithm consist of the detection of the aorta and the entire heart region. Next, candidate coronary artery components are detected in the heart region after masking of the cardiac blood pools. Based on their location and geometrical properties, the structures representing the right and left arteries are selected from the candidate list. Starting from the aorta, connections between these structures are made, resulting in a final segmentation of the whole coronary artery tree. A fast-marching level set method combined with a backtracking algorithm is employed to obtain the initial centerlines within this segmentation. For all vessels a curved multiplanar reformatted image (CMPR) is constructed and used to detect the lumen contours. The final centerline is then defined by determining the center of gravity of the detected lumen in the transversal CMPR slices. Within the scope of the MICCAI Challenge “Coronary Artery Tracking 2008”, the coronary tree segmentation and centerline extraction scheme was used to automatically detect a set of centerlines in 24 datasets. For 8 data sets reference centerlines were available; this training data was used during the development and tuning of the algorithm. Sixteen other data sets were provided as testing data. Evaluation of the proposed methodology was performed through submission of the resulting centerlines to the MICCAI Challenge website.
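The final refinement step described above, taking the center of gravity of the detected lumen in each transversal CMPR slice, reduces to the centroid of a binary mask; a toy sketch (not the authors' code, with a hypothetical mask):

```python
import numpy as np

def lumen_centroid(mask):
    """Center of gravity (row, col) of a binary lumen mask in one CMPR slice."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

slice_mask = np.zeros((5, 5), dtype=int)
slice_mask[1:4, 2:4] = 1           # hypothetical detected lumen pixels
print(lumen_centroid(slice_mask))  # (2.0, 2.5)
```

Stacking one such centroid per slice, mapped back from CMPR to world coordinates, yields the refined centerline.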


Water ◽  
2020 ◽  
Vol 12 (10) ◽  
pp. 2951 ◽  
Author(s):  
Assefa M. Melesse ◽  
Khabat Khosravi ◽  
John P. Tiefenbacher ◽  
Salim Heddam ◽  
Sungwon Kim ◽  
...  

Electrical conductivity (EC), one of the most widely used indices for water quality assessment, has been applied to predict the salinity of the Babol-Rood River, the greatest source of irrigation water in northern Iran. This study uses two individual—M5 Prime (M5P) and random forest (RF)—and eight novel hybrid algorithms—bagging-M5P, bagging-RF, random subspace (RS)-M5P, RS-RF, random committee (RC)-M5P, RC-RF, additive regression (AR)-M5P, and AR-RF—to predict EC. Thirty-six years of observations collected by the Mazandaran Regional Water Authority were randomly divided into two sets: 70% from the period 1980 to 2008 was used as model-training data and 30% from 2009 to 2016 was used as testing data to validate the models. Several water quality variables—pH, HCO3−, Cl−, SO42−, Na+, Mg2+, Ca2+, river discharge (Q), and total dissolved solids (TDS)—were the modeling inputs. Using EC and the correlation coefficients (CC) of the water quality variables, a set of nine input combinations was established. TDS, the most effective input variable, had the highest EC-CC (r = 0.91), and it was also determined to be the most important input variable among the input combinations. All models were trained and each model’s prediction power was evaluated with the testing data. Several quantitative criteria and visual comparisons were used to evaluate modeling capabilities. Results indicate that, in most cases, hybrid algorithms enhance individual algorithms’ predictive powers. The AR algorithm enhanced both M5P and RF predictions better than bagging, RS, and RC. M5P performed better than RF. Further, AR-M5P outperformed all other algorithms (R2 = 0.995, RMSE = 8.90 μS/cm, MAE = 6.20 μS/cm, NSE = 0.994 and PBIAS = −0.042). The hybridization of machine learning methods has significantly improved model performance to capture maximum salinity values, which is essential in water resource management.
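Bagging, the simplest of the hybridization schemes above, can be sketched generically: fit the base learner on bootstrap resamples of the training data and average the predictions. Here an ordinary least-squares fit stands in for M5P/RF, and the data are synthetic:

```python
import numpy as np

def bagged_lstsq(X, y, n_models=10, seed=0):
    """Fit least-squares models on bootstrap resamples; predict by averaging."""
    rng = np.random.default_rng(seed)
    coefs = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), len(X))      # bootstrap sample with replacement
        w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        coefs.append(w)
    w_bar = np.mean(coefs, axis=0)                 # aggregate: average the models
    return lambda Xq: Xq @ w_bar

x = np.linspace(0, 10, 30)
X = np.column_stack([x, np.ones_like(x)])          # slope + intercept design matrix
y = 2 * x + 1                                      # noiseless y = 2x + 1
predict = bagged_lstsq(X, y)
print(np.allclose(predict(X), y))  # True
```

On noisy data the averaging reduces the variance of the base learner, which is the effect the hybrid EC models exploit.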


Aviation ◽  
2016 ◽  
Vol 20 (2) ◽  
pp. 39-47 ◽  
Author(s):  
Panarat SRISAENG ◽  
Steven RICHARDSON ◽  
Glenn S. BAXTER ◽  
Graham WILD

This study has proposed and empirically tested for the first time Genetic Algorithm (GA) models for forecasting Australia’s domestic low cost carriers’ demand, as measured by enplaned passengers (GAPAXDE Model) and revenue passenger kilometres performed (GARPKSDE Model). The data were divided into training and testing sets: 36 training data sets were used to estimate the weighting factors of the GA models, and 6 data sets were used to test their robustness. The genetic algorithm parameters used in this study comprised a population size (n) of 1000, a generation number of 200, and a mutation rate of 0.01. The modelling results show that both the linear GAPAXDE and GARPKSDE models are more accurate and reliable, and have a slightly greater predictive capability, than the quadratic models. The overall mean absolute percentage errors (MAPE) of the GAPAXDE and GARPKSDE models are 3.33 per cent and 4.48 per cent, respectively.
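A generic GA of the kind described can be sketched as follows, with hypothetical settings far smaller than the study's (population 1000, 200 generations, mutation rate 0.01): weight vectors of a linear demand model evolve to minimize MAPE, and elitism keeps the best individual so the best fitness never worsens.

```python
import numpy as np

def ga_fit(X, y, pop=30, gens=40, mut_rate=0.01, seed=0):
    """Toy genetic algorithm minimizing MAPE of y_hat = X @ w."""
    rng = np.random.default_rng(seed)
    popu = rng.normal(0, 1, (pop, X.shape[1]))
    def fitness(w):  # MAPE in percent; lower is better
        return np.mean(np.abs((y - X @ w) / y)) * 100
    best_history = []
    for _ in range(gens):
        scores = np.array([fitness(w) for w in popu])
        order = scores.argsort()
        best_history.append(scores[order[0]])
        elite = popu[order[: pop // 2]]            # elitism: keep the best half
        # crossover: average random pairs of elites, then mutate occasionally
        pairs = rng.integers(0, len(elite), (pop - len(elite), 2))
        children = elite[pairs].mean(axis=1)
        children += rng.normal(0, 1, children.shape) * (rng.random(children.shape) < mut_rate)
        popu = np.vstack([elite, children])
    return popu[np.argmin([fitness(w) for w in popu])], best_history

x = np.linspace(1, 10, 20)
X = np.column_stack([x, np.ones_like(x)])
y = 3 * x + 5                                      # synthetic "demand" data
w, hist = ga_fit(X, y)
print(hist[-1] <= hist[0])  # True: elitism never loses the best individual
```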


2019 ◽  
Vol 8 (4) ◽  
pp. 518-529
Author(s):  
Setya Adi Rahmawan ◽  
Diah Safitri ◽  
Tatik Widiharih

Fuzzy Time Series (FTS) is a time series forecasting technique that uses fuzzy theory concepts. Forecasting systems using FTS are useful for capturing patterns in past data and then using them to produce information about the future. Initially, in FTS, each relation pattern formed was considered to have the same weight, and only the first order was used. In its development, the Weighted Fuzzy Integrated Time Series (WFITS) appeared, which assigns a different weight to each relation and allows high-order usage. The accuracy of the forecasting results is measured using the Root Mean Square Error (RMSE) and Mean Absolute Percentage Error (MAPE). In this study both the first-order and high-order WFITS methods were applied to forecast rice prices in Indonesia based on data from January 2011 to December 2017. The results of the analysis show that forecasting with Lee's high-order (1,2,3) WFITS algorithm gives an RMSE and MAPE on the testing data of 69.898 and 0.47%, respectively, while the RMSE and MAPE on the training data are 70.4039 and 0.54%. Keywords: Fuzzy Time Series, Weighted Fuzzy Integrated Time Series, RMSE, MAPE, High-Order, Rice Prices
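A much-simplified, frequency-weighted, first-order fuzzy-time-series forecast can illustrate the idea (the paper's Lee high-order WFITS is more elaborate, and the intervals and series below are made up): fuzzify the series into intervals, collect the observed transitions per interval, and forecast from a state as the weighted mean of the successor intervals' midpoints, with repeated transitions acting as weights.

```python
import numpy as np
from collections import defaultdict

def wfts_forecast(series, edges):
    """First-order weighted FTS: fuzzify into intervals, collect transitions,
    forecast each state as the frequency-weighted mean of successor midpoints."""
    mids = (edges[:-1] + edges[1:]) / 2
    states = np.clip(np.searchsorted(edges, series, side="right") - 1,
                     0, len(mids) - 1)
    groups = defaultdict(list)                 # state -> successor states (with repeats)
    for a, b in zip(states[:-1], states[1:]):
        groups[a].append(b)
    # repeats act as frequency weights inside the averaged midpoint
    return {a: float(np.mean(mids[np.array(succ)])) for a, succ in groups.items()}, states

edges = np.array([0.0, 15.0, 25.0, 35.0])      # interval boundaries (hypothetical)
rules, states = wfts_forecast(np.array([10, 20, 10, 20, 30]), edges)
print(rules[0])  # 20.0: both observed successors of state 0 were state 1
```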

