Communication Cost Reduction with Partial Structure in Federated Learning

Electronics ◽  
2021 ◽  
Vol 10 (17) ◽  
pp. 2081 ◽  
Author(s):  
Dongseok Kang ◽  
Chang Wook Ahn

Federated learning is a distributed learning algorithm designed to train a single server model using many clients and their local data. Improving the server model requires continuous communication with clients, and because the number of clients is very large, the algorithm must be designed with communication cost in mind. In this paper, we propose a method that distributes models whose structure differs from that of the server model, matching each distributed model to clients with different data sizes, and trains the server model using the reconstructed models trained by the clients. In this way, the server deploys only a subset of its sequential model, collects gradient updates, and selectively applies those updates to the server model. Delivering the server model at lower cost to clients that only need smaller models reduces the communication cost of training compared with standard methods. An image classification model was designed to verify the effectiveness of the proposed method across three data-distribution scenarios and two datasets, and it was confirmed that training was accomplished at only 0.229 times the cost of the standard method.
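
A minimal sketch of the deploy/collect/apply loop described above, assuming a sequential model stored layer by layer: each client receives only a prefix of the layer stack sized to its data, and the server averages the returned updates layer-wise, leaving undistributed layers untouched. All names and the averaging rule are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' code): distribute a prefix of a
# sequential model to each client, then selectively apply the returned
# updates. Layer shapes and the averaging rule are illustrative.
import numpy as np

rng = np.random.default_rng(0)
server_model = {f"layer{i}": rng.normal(size=(8, 8)) for i in range(4)}

def extract_submodel(model, n_layers):
    """Send only the first n_layers of the sequential model to a client."""
    keys = list(model)[:n_layers]
    return {k: model[k].copy() for k in keys}

def client_update(submodel):
    """Stand-in for local training: return a pseudo-update per layer."""
    return {k: 0.1 * rng.normal(size=v.shape) for k, v in submodel.items()}

def apply_partial_updates(model, updates_per_client):
    """Average updates layer-wise; layers no client trained stay unchanged."""
    for name in model:
        deltas = [u[name] for u in updates_per_client if name in u]
        if deltas:
            model[name] += np.mean(deltas, axis=0)

# Clients with smaller datasets receive smaller prefixes of the model.
submodels = [extract_submodel(server_model, n) for n in (2, 3, 4)]
apply_partial_updates(server_model, [client_update(s) for s in submodels])
```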


2019 ◽  
pp. 1503-1510
Author(s):  
Felipe Santinato ◽  
Carlos Diego da Silva ◽  
Rouverson Pereira da Silva ◽  
Antônio Tassio Silva Ormond ◽  
Victor Afonso Reis Gonçalves ◽  
...  

The use of adapted harvesters for harvesting first-crop coffee costs less and is more efficient than manual harvesting. In view of this, the present study aimed to analyze the operational cost of mechanized harvesting of first-crop coffee. The experiment was conducted in a (2 × 3) + 1 factorial scheme and laid out in randomized blocks with five replications. There were seven treatments: two self-propelled harvesters (conventional and adapted), each operated one, two, or three times, plus manual harvesting. The treatments were tested in a young coffee crop planted in Catalão, GO, irrigated by center pivot, with plants 1.5 m in height. We measured the coffee lost, the coffee harvested, and the coffee remaining before each operation to obtain efficiency parameters, and combined these with the prices of the operations to obtain the costs of the treatments. When operated once and three times, the adapted harvester required a lower cost than the conventional harvester. Moreover, the adapted harvester showed no cost difference between operations. The cost reduction from mechanized harvesting varied from 23.96% to 59.9%, depending on the number of mechanized operations. In conclusion, harvesting young coffee with adapted harvesters is efficient and reduces the cost of coffee harvesting.



2019 ◽  
Vol 10 (1) ◽  
pp. 1-27
Author(s):  
Aniek Wijayanti

Business process analysis can be used to eliminate or reduce waste costs caused by non-value-added activities in a process. This research aims to evaluate the activities carried out in the natural-material procurement process at PT XYZ, calculate the effectiveness of the process cycle, find ways to improve process management, and calculate the cost reduction achievable through activity management. The research took a case-study approach: data were obtained through in-depth interviews with staff directly involved in the process, observation, and documentation of natural-material procurement. The results show that the process cycle effectiveness of natural-material procurement at the factory reached 87.1% for sand and 72% for crushed stone, indicating that the process still includes non-value-added activities and carries ineffective costs. Through business process mechanisms, these non-value-added activities can be managed so that the process cycle becomes more efficient and cost effectiveness is achieved. After implementing activity management, the calculated process cycle effectiveness is 100%, meaning the cost of the natural-material procurement process has become effective. The estimated cost reduction resulting from activity management is Rp249.026.635,90 per year.
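
The reported percentages are consistent with the usual process cycle effectiveness ratio, value-added time divided by total cycle time. A minimal illustration; the activity durations below are hypothetical, chosen only to reproduce the 87.1% figure for sand:

```python
# Process cycle effectiveness (PCE) = value-added time / total cycle time.
# The durations are hypothetical; the paper reports PCE of 87.1% for sand
# and 72% for crushed stone before activity management.
def process_cycle_effectiveness(activities):
    """activities: list of (duration, is_value_added) tuples."""
    total = sum(d for d, _ in activities)
    value_added = sum(d for d, va in activities if va)
    return value_added / total

sand = [(540, True), (80, False)]  # minutes, illustrative only
print(f"PCE: {process_cycle_effectiveness(sand):.1%}")  # -> PCE: 87.1%
```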



2017 ◽  
Vol 1 (2) ◽  
pp. 81-107
Author(s):  
Dheny Biantara

This paper summarizes Indonesian airline executives' views on the causes of the cost problem in major airlines and on the potential areas and measures for cost reduction in airline operations. It presents an introductory survey in which three executives from three Indonesian airlines were respondents. In the executives' opinion, the cost problem in major Indonesian airlines is primarily due to fuel and oil prices and currency exchange rates. Of the various functions in an airline, maintenance was seen as the least cost-efficient, whereas flight operations were seen as the area with the most potential for cost reduction. Indonesian airlines had made route and fleet changes after the beginning of 2011 to reduce costs. The analysis concludes that privatization would be an important step towards more efficient airline operation. Greater flexibility from the Indonesian airline regulator would be very welcome, as would applying the value-chain concept to give Indonesian airlines competitive advantage and cost-leadership differentiation.



Energies ◽  
2021 ◽  
Vol 14 (11) ◽  
pp. 3117
Author(s):  
Junghwan Kim

Engine knock determination has been conducted in various ways for spark timing calibration. In the present study, a knock classification model was developed using a machine learning algorithm. Wavelet packet decomposition (WPD) and ensemble empirical mode decomposition (EEMD) were employed to characterize the in-cylinder pressure signals from the experimental engine. The WPD was used to calculate 255 features from seven decomposition levels. EEMD provided a total of 70 features from its intrinsic mode functions (IMFs). The experimental engine was operated at advanced spark timings to induce knocking under various engine speeds and load conditions. Three knock intensity metrics were employed to determine that the dataset included 4158 knock cycles out of a total of 66,000 cycles. The classification model trained with 66,000 cycles achieved an accuracy of 99.26% in knock cycle detection. Neighborhood component analysis revealed that seven features contributed significantly to the classification. The classification model retrained with the seven significant features achieved an accuracy of 99.02%. Although the misclassification rate increased in normal cycle detection, the feature selection decreased the model size from 253 MB to 8.25 MB. Finally, the compact classification model achieved an accuracy of 99.95% on a second dataset obtained at knock borderline (KBL) timings, which validates that the model is sufficient for KBL timing determination.
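
The 255 WPD features match the node count of a seven-level wavelet packet tree (1 + 2 + 4 + ... + 128 = 255 nodes, root included). Below is a sketch with PyWavelets of one common choice, per-node energy features; the wavelet ('db4') and the synthetic signal are assumptions, as the abstract does not specify them.

```python
# Sketch of wavelet-packet energy features from an in-cylinder pressure
# signal: one energy value per node of a 7-level tree (255 features,
# matching the paper's count). Wavelet and signal are illustrative.
import numpy as np
import pywt

rng = np.random.default_rng(0)
pressure = rng.normal(size=4096)  # stand-in for a measured pressure trace

wp = pywt.WaveletPacket(data=pressure, wavelet="db4",
                        mode="symmetric", maxlevel=7)

features = [float(np.sum(pressure ** 2))]  # root node energy
for level in range(1, 8):
    for node in wp.get_level(level, order="natural"):
        features.append(float(np.sum(node.data ** 2)))

print(len(features))  # 255
```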



2021 ◽  
Vol 7 (7) ◽  
pp. 105
Author(s):  
Guillaume Reichert ◽  
Ali Bellamine ◽  
Matthieu Fontaine ◽  
Beatrice Naipeanu ◽  
Adrien Altar ◽  
...  

The growing need for emergency imaging has greatly increased the number of conventional X-rays, particularly for traumatic injury. Deep learning (DL) algorithms could improve fracture screening by radiologists and emergency room (ER) physicians. We used an algorithm developed for the detection of appendicular skeleton fractures and evaluated its performance for detecting traumatic fractures on conventional X-rays in the ER, without the need for training on local data. This algorithm was tested on all patients (N = 125) consulting at the Louis Mourier ER in May 2019 for limb trauma. Patients were selected by two emergency physicians from the clinical database used in the ER. Their X-rays were exported and analyzed by a radiologist. The prediction made by the algorithm and the annotation made by the radiologist were compared. For the 125 patients included, 25 patients with a fracture were identified by the clinicians, 24 of whom were identified by the algorithm (sensitivity of 96%). The algorithm incorrectly predicted a fracture in 14 of the 100 patients without fractures (specificity of 86%). The negative predictive value was 98.85%. This study shows that DL algorithms are potentially valuable diagnostic tools for detecting fractures in the ER and could be used in the training of junior radiologists.
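
The reported metrics follow directly from the counts in the abstract (24 of 25 fractures detected; 14 false alarms among 100 fracture-free patients), as this quick check shows:

```python
# Recomputing the screening metrics from the counts given in the abstract:
# 24 of 25 fractures detected, 14 false alarms among 100 fracture-free cases.
tp, fn = 24, 1    # fractures found / missed by the algorithm
fp, tn = 14, 86   # false alarms / correct negatives

sensitivity = tp / (tp + fn)   # 0.96
specificity = tn / (tn + fp)   # 0.86
npv = tn / (tn + fn)           # ~0.9885
print(f"{sensitivity:.0%} {specificity:.0%} {npv:.2%}")
```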



2021 ◽  
Vol 13 (15) ◽  
pp. 2935
Author(s):  
Chunhua Qian ◽  
Hequn Qiang ◽  
Feng Wang ◽  
Mingyang Li

Building a high-precision, stable, and universal automatic extraction model of rocky desertification information is a prerequisite for exploring the spatiotemporal evolution of rocky desertification. Taking Guizhou province as the research area and based on MODIS and continuous forest inventory data in China, we used machine learning algorithms to build a rocky desertification model with bedrock exposure rate, temperature difference, humidity, and other characteristic factors, and sought to improve model accuracy along both the spatial and temporal dimensions. The results showed the following: (1) A supervised classification approach was used to build the rocky desertification model; logistic, random forest (RF), and support vector machine (SVM) models were constructed separately. The accuracies of the models were 73.8%, 78.2%, and 80.6%, respectively, and the kappa coefficients were 0.61, 0.672, and 0.707, respectively. SVM performed best. (2) Vegetation types and vegetation seasonal phases are closely related to rocky desertification. After incorporating them, the model accuracy and kappa coefficient improved to 91.1% and 0.861. (3) The spatial distribution of rocky desertification in Guizhou shows a clear pattern: heavy in the west, light in the east, heavy in the south, and light in the north. Rocky desertification continuously increased from 2001 to 2019. In conclusion, combining the vertical spatial structure of vegetation with differences in seasonal phase is an effective way to improve the modeling accuracy of rocky desertification, and the SVM model has the highest rocky desertification classification accuracy. The research results provide data support for exploring the spatiotemporal evolution pattern of rocky desertification in Guizhou.
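
A minimal sketch of the supervised SVM classification step with scikit-learn; the feature columns follow the abstract (bedrock exposure rate, temperature difference, humidity), but the data and labels here are synthetic stand-ins:

```python
# Minimal sketch of the supervised SVM classification step. Feature columns
# follow the abstract; data and labels are synthetic, not the paper's.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.random((500, 3))  # [bedrock exposure, temp difference, humidity]
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0.5).astype(int)  # toy labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = SVC(kernel="rbf").fit(X_tr, y_tr)
pred = model.predict(X_te)
print(accuracy_score(y_te, pred), cohen_kappa_score(y_te, pred))
```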



2021 ◽  
Vol 15 (3) ◽  
pp. 1-28
Author(s):  
Xueyan Liu ◽  
Bo Yang ◽  
Hechang Chen ◽  
Katarzyna Musial ◽  
Hongxu Chen ◽  
...  

The stochastic blockmodel (SBM) is a widely used statistical network representation model with good interpretability, expressiveness, generalization, and flexibility, and it has become prevalent and important in the field of network science in recent years. However, learning an optimal SBM for a given network is an NP-hard problem. This significantly limits the application of SBMs to large-scale networks, because of the computational overhead of existing SBM models and their learning methods. Reducing the cost of SBM learning and making it scalable to large-scale networks, while maintaining the good theoretical properties of the SBM, remains an unresolved problem. In this work, we address this challenging task from the novel perspective of model redefinition. We propose a redefined SBM with Poisson distribution and a block-wise learning algorithm for it that can efficiently analyse large-scale networks. Extensive validation on both artificial and real-world data shows that our proposed method significantly outperforms the state-of-the-art methods in terms of a reasonable trade-off between accuracy and scalability.
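
For concreteness, here is a sketch of the Poisson-SBM likelihood that such a model builds on: each directed edge count A[i, j] is modeled as Poisson with a rate depending only on the blocks of i and j. The block-wise learning algorithm itself is the paper's contribution and is not reproduced here.

```python
# Sketch of the Poisson SBM: edge counts A[i, j] ~ Poisson(lam[z[i], z[j]]).
# This only illustrates the model definition, not the paper's learner.
import numpy as np
from scipy.stats import poisson

def poisson_sbm_loglik(A, z, lam):
    """Log-likelihood of adjacency counts A under block labels z and
    block-pair rate matrix lam (directed graph, self-loops ignored)."""
    n = len(z)
    mask = ~np.eye(n, dtype=bool)
    rates = lam[np.ix_(z, z)]  # per-pair rates from block memberships
    return poisson.logpmf(A[mask], rates[mask]).sum()

A = np.array([[0, 2, 0], [1, 0, 0], [0, 0, 0]])
z = np.array([0, 0, 1])
lam = np.array([[1.5, 0.1], [0.1, 1.0]])
print(poisson_sbm_loglik(A, z, lam))
```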



2020 ◽  
Vol 11 (1) ◽  
pp. 96
Author(s):  
Wen-Lan Wu ◽  
Meng-Hua Lee ◽  
Hsiu-Tao Hsu ◽  
Wen-Hsien Ho ◽  
Jing-Min Liang

Background: In this study, an automatic scoring system for the functional movement screen (FMS) was developed. Methods: Thirty healthy adults fitted with full-body inertial measurement unit sensors completed six FMS exercises. The system recorded kinematics data, and a professional athletic trainer graded each participant. To reduce the number of input variables for the predictive model, ordinal logistic regression was used for subset feature selection. The ensemble learning algorithm AdaBoost.M1 was used to construct classifiers. Accuracy and F-score were used for classification model evaluation. The consistency between automatic and manual scoring was assessed using a weighted kappa statistic. Results: When all the features were used, the predictive model showed moderate to high accuracy, with kappa values indicating fair to very good agreement. After feature selection, model accuracy decreased by about 10%, with kappa values indicating poor to moderate agreement. Conclusions: The results indicate that higher prediction accuracy was achieved using the full feature set than using the reduced feature set.
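
A minimal sketch of such a scoring pipeline with scikit-learn: an AdaBoost ensemble on kinematic features, evaluated with accuracy, F-score, and a weighted kappa against the manual grades. Note that scikit-learn's AdaBoostClassifier implements SAMME, a multi-class relative of AdaBoost.M1, and the data below are synthetic, not the study's.

```python
# Sketch of the scoring pipeline: AdaBoost ensemble on IMU-derived
# features, evaluated with accuracy, F-score, and weighted kappa.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score, f1_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))    # kinematic features (toy data)
y = rng.integers(1, 4, size=300)  # FMS grades 1-3 (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

print(accuracy_score(y_te, pred),
      f1_score(y_te, pred, average="macro"),
      cohen_kappa_score(y_te, pred, weights="quadratic"))
```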



2014 ◽  
Vol 665 ◽  
pp. 643-646
Author(s):  
Ying Liu ◽  
Yan Ye ◽  
Chun Guang Li

A metalearning algorithm learns the parameters of a base learning algorithm, with the aim of improving the performance of the learning system. The incremental delta-bar-delta (IDBD) algorithm is such a metalearning algorithm. On the other hand, sparse algorithms are gaining popularity due to their good performance and wide applications. In this paper, we propose a sparse IDBD algorithm that takes the sparsity of the system into account. An l1-norm penalty is included in the cost function of the standard IDBD, which is equivalent to adding a zero attractor to the iterations and thus can speed up convergence when the system of interest is indeed sparse. Simulations demonstrate that the proposed algorithm is superior to the competing algorithms in sparse system identification.
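
A minimal sketch of what such an algorithm can look like: the standard IDBD updates (per-weight log step sizes and update traces, after Sutton, 1992) plus a sign-based shrinkage of the weights, which is the zero attractor an l1 penalty in the cost function induces. The step-size constants and attractor strength are illustrative assumptions, not the paper's values.

```python
# Sketch of IDBD with an l1 zero attractor: standard IDBD per-weight
# adaptive step sizes, plus sign-based shrinkage toward zero.
# theta, rho, and beta0 are illustrative, not the paper's values.
import numpy as np

def sparse_idbd(X, d, theta=0.005, rho=1e-4, beta0=-4.0):
    n, m = X.shape
    w = np.zeros(m)            # filter weights
    beta = np.full(m, beta0)   # per-weight log step sizes
    h = np.zeros(m)            # update traces
    for t in range(n):
        x = X[t]
        err = d[t] - w @ x                        # prediction error
        beta += theta * err * x * h               # meta-learning step
        alpha = np.exp(beta)                      # per-weight learning rates
        w += alpha * err * x - rho * np.sign(w)   # LMS step + zero attractor
        h = h * np.maximum(0.0, 1.0 - alpha * x * x) + alpha * err * x
    return w

rng = np.random.default_rng(0)
w_true = np.zeros(20); w_true[[3, 17]] = [1.0, -0.5]  # sparse system
X = rng.normal(size=(5000, 20))
d = X @ w_true + 0.01 * rng.normal(size=5000)
print(np.round(sparse_idbd(X, d)[[3, 17]], 2))  # recovers nonzero taps
```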


