Network-based pricing for 3D printing services in two-sided manufacturing-as-a-service marketplace

2020 ◽  
Vol 26 (1) ◽  
pp. 82-88 ◽  
Author(s):  
Deepak Pahwa ◽  
Binil Starly

Purpose This paper presents approaches to determine network-based pricing for 3D printing services in the context of a two-sided manufacturing-as-a-service marketplace. The purpose of this study is to provide cost analytics to enable service bureaus to better compete in the market by moving away from setting ad hoc and subjective prices.
Design/methodology/approach A data mining approach with machine learning methods is used to estimate a price range based on the profile characteristics of 3D printing service suppliers. The model considers factors such as supplier experience, supplier capabilities, customer reviews and ratings from past orders and scale of operations, among others, to estimate a price range for suppliers’ services. Data were gathered from existing marketplace websites and then used to train and test the model.
Findings The model demonstrates an accuracy of 65 per cent for US-based suppliers and 59 per cent for Europe-based suppliers in classifying a supplier’s 3D printer listing into one of seven price categories. The improvement over the baseline accuracy of 25 per cent demonstrates that machine learning-based methods are promising for network-based pricing in manufacturing marketplaces.
Originality/value Conventional activity-based costing methodologies are inefficient for strategically pricing 3D printing service offerings in a connected marketplace. Rather than setting prices arbitrarily, this work proposes an approach that uses data mining methods to estimate competitive prices. Such tools can be built into online marketplaces to help independent service bureaus determine service price rates.
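As a rough illustration of the kind of price-category classifier this abstract describes, the sketch below trains a tree-ensemble model on a hypothetical supplier-profile table; the feature names, file name and model choice are assumptions for illustration, not the authors' actual data schema or algorithm.

```python
# Minimal sketch of a seven-category price classifier for supplier listings.
# Feature names, the CSV file and the label encoding are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical supplier profiles scraped from a marketplace listing.
df = pd.read_csv("supplier_profiles.csv")
features = ["years_experience", "num_printers", "avg_rating",
            "num_reviews", "num_materials", "orders_completed"]
X = df[features]
y = df["price_category"]          # one of seven labels, e.g. 0..6

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=300, random_state=42)
clf.fit(X_train, y_train)

# Compare against the 25 per cent baseline accuracy cited in the abstract.
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```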

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Satoko Hiura ◽  
Shige Koseki ◽  
Kento Koyama

Abstract In predictive microbiology, statistical models are employed to predict bacterial population behavior in food using environmental factors such as temperature, pH, and water activity. As the amount and complexity of data increase, handling all data with high-dimensional variables becomes a difficult task. We propose a data mining approach to predict bacterial behavior using a database of microbial responses to food environments. Population growth and inactivation data for Listeria monocytogenes, a foodborne pathogen, under 1,007 environmental conditions, covering five food categories (beef, culture medium, pork, seafood, and vegetables) and temperatures ranging from 0 to 25 °C, were obtained from the ComBase database (www.combase.cc). We used the eXtreme gradient boosting tree, a machine learning algorithm, to predict bacterial population behavior from eight explanatory variables: ‘time’, ‘temperature’, ‘pH’, ‘water activity’, ‘initial cell counts’, ‘whether the viable count is the initial cell number’, and two types of categories regarding food. The root mean square error between observed and predicted values was approximately 1.0 log CFU regardless of food category, which suggests the possibility of predicting viable bacterial counts in various foods. The data mining approach examined here will enable the prediction of bacterial population behavior in food by identifying hidden patterns within a large amount of data.
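A minimal sketch of the gradient-boosted-tree workflow described above follows; the column names and the CSV export are assumptions, and the ComBase records would need to be downloaded and reshaped separately.

```python
# Sketch: predict log10 viable counts from environmental variables with XGBoost.
# Column names are placeholders for a reshaped ComBase export.
import pandas as pd
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

data = pd.read_csv("combase_listeria.csv")
X = pd.get_dummies(
    data[["time_h", "temperature_C", "pH", "water_activity",
          "initial_log_count", "is_initial_count",
          "food_category", "food_subcategory"]],
    columns=["food_category", "food_subcategory"])
y = data["log_count"]                      # observed log10 CFU/g

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBRegressor(n_estimators=500, learning_rate=0.05, max_depth=6)
model.fit(X_tr, y_tr)

rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print(f"RMSE: {rmse:.2f} log CFU")          # the paper reports roughly 1.0
```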


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Roberto Salazar-Reyna ◽  
Fernando Gonzalez-Aleu ◽  
Edgar M.A. Granda-Gutierrez ◽  
Jenny Diaz-Ramirez ◽  
Jose Arturo Garza-Reyes ◽  
...  

Purpose The objective of this paper is to assess and synthesize the published literature related to the application of data analytics, big data, data mining and machine learning to healthcare engineering systems.
Design/methodology/approach A systematic literature review (SLR) was conducted to obtain the most relevant papers related to the research study from three different platforms: EBSCOhost, ProQuest and Scopus. The literature was assessed and synthesized, conducting analysis associated with the publications, authors and content.
Findings From the SLR, 576 publications were identified and analyzed. The research area seems to show the characteristics of a growing field, with new research areas evolving and applications being explored. In addition, the main authors and collaboration groups publishing in this research area were identified through a social network analysis. This could help new and current authors identify researchers with common interests in the field.
Research limitations/implications The use of the SLR methodology does not guarantee that all relevant publications related to the research are covered and analyzed. However, the authors' previous knowledge and the nature of the publications were used to select the different platforms.
Originality/value To the best of the authors' knowledge, this paper represents the most comprehensive literature-based study on the fields of data analytics, big data, data mining and machine learning applied to healthcare engineering systems.
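The co-authorship social network analysis mentioned in the findings could be sketched along the following lines, assuming a list of publications with their author lists extracted from the SLR corpus; the records shown are placeholders.

```python
# Sketch: build a weighted co-authorship graph and report collaboration groups.
from itertools import combinations

import networkx as nx

# Placeholder records: one (paper id, author list) pair per reviewed publication.
papers = [
    ("paper_1", ["A. Smith", "B. Jones"]),
    ("paper_2", ["B. Jones", "C. Lee", "D. Kim"]),
]

G = nx.Graph()
for _, authors in papers:
    for a, b in combinations(authors, 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1          # repeated collaborations
        else:
            G.add_edge(a, b, weight=1)

# Collaboration groups = connected components; key authors = degree centrality.
groups = list(nx.connected_components(G))
central = sorted(nx.degree_centrality(G).items(), key=lambda x: -x[1])[:10]
print(len(groups), "collaboration groups; most central authors:", central)
```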


Facilities ◽  
2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Cheng Zhang ◽  
Zehao Ye

Purpose Developing physical pipe prediction models consumes considerable resources, and statistical models cannot fit failure records perfectly. The purpose of this paper is therefore to use a data mining method to analyze and predict the risk of water pipe failure by considering the attributes and locations of pipes in historical failure records. One of the automated machine learning (AutoML) methods, the tree-based pipeline optimization tool (TPOT), was used as the key data mining technique in this research.
Design/methodology/approach By considering pipeline attributes, environmental factors and historical pipeline break records, a water pipeline failure prediction method is proposed. Regression analysis, genetic algorithms, machine learning and data mining approaches are used to analyze and predict the probability of pipeline failure, with TPOT as the key data mining technique. A case study was carried out in a specific area in China to investigate the relationships between pipeline breaks and relevant parameters such as pipeline age, material, diameter and pipeline density.
Findings By integrating the prediction models for individual pipelines and small research regions, a prediction model is developed to describe the probability of water pipe failures and is validated against real data. A high goodness of fit is achieved, indicating good potential for using the proposed method in practice as a guideline for identifying high-risk areas, taking proactive measures and optimizing resource allocation for water supply companies.
Originality/value Different models are developed to better predict failures for regions and individual pipelines. A comparison of predicted values with real records shows that the preliminary model has good potential for predicting future failure risks.
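A hedged sketch of how TPOT might be applied to such a pipe-failure table follows; the feature columns, file name and scoring choice are illustrative assumptions rather than the case-study schema.

```python
# Sketch: let TPOT's genetic search pick a preprocessing + model pipeline
# for a binary pipe-failure label. Column names are placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

pipes = pd.read_csv("pipe_records.csv")
X = pipes[["age_years", "diameter_mm", "material_code",
           "pipe_density", "soil_type_code", "pressure_kpa"]]
y = pipes["failed"]                      # 1 if a break was recorded, else 0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

# TPOT searches over candidate pipelines with a genetic algorithm.
tpot = TPOTClassifier(generations=5, population_size=50,
                      scoring="roc_auc", random_state=1, verbosity=2)
tpot.fit(X_tr, y_tr)
print("hold-out AUC:", tpot.score(X_te, y_te))
tpot.export("best_pipe_failure_pipeline.py")   # reusable scikit-learn code
```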


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Mirpouya Mirmozaffari ◽  
Elham Shadkam ◽  
Seyyed Mohammad Khalili ◽  
Kamyar Kabirifar ◽  
Reza Yazdani ◽  
...  

Purpose Cement, one of the major components of construction activities, releases a tremendous amount of carbon dioxide (CO2) into the atmosphere, resulting in adverse environmental impacts and high energy consumption. Increasing CO2 consumption has urged construction companies and decision-makers to consider ecological efficiency as affected by CO2 consumption. Therefore, this paper aims to develop a method capable of analyzing and assessing the factors determining eco-efficiency in Iran’s 22 local cement companies over 2015–2019.
Design/methodology/approach This research uses two well-known artificial intelligence approaches, namely optimization data envelopment analysis (DEA) in the first step and machine learning algorithms in the second, to fulfill the research aim. To find the superior model, the CCR, BCC and additive DEA models are used to measure the efficiency of decision processes. Radial models measure efficiency through a proportional decrease or increase of inputs/outputs, which neglects slacks; this is a critical limitation. Thus, the additive model, a well-known non-proportional and non-radial DEA model that considers desirable and undesirable outputs, is used to solve the problem. Additive models measure efficiency via slack variables, and handling both input and output orientations is one of their main advantages.
Findings After applying the proposed model, the Malmquist productivity index is computed to evaluate the productivity of the companies over 2015–2019. Although DEA is an appreciated evaluation method, it cannot extract hidden information. Thus, machine learning algorithms play an important role in this step. Association rules are used to extract hidden rules and to introduce the three strongest rules. Finally, three data mining classification algorithms in three different tools are applied to identify the superior algorithm and tool. A new model that converts the two-stage process into a single stage is proposed to obtain the eco-efficiency of the whole system; it fixes the efficiency of the two-stage process and prevents dependency on various weights. Undesirable outputs and desirable inputs are converted into final desirable inputs in the single-stage model to minimize inputs, and desirable outputs are turned into final desirable outputs to maximize outputs, which has a positive effect on the efficiency of the whole process.
Originality/value The proposed approach makes it possible to recognize patterns across the whole system by combining DEA and data mining techniques over the selected period (the five years from 2015 to 2019). As the cement industry is one of the foremost producers of environmentally harmful material with an undesirable by-product, specific stress is given to pollution-control investment and undesirable outputs when evaluating energy-use efficiency. The study concentrates on answering five preliminary research questions.
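The association-rule step could look like the sketch below, which uses the Apriori implementation in mlxtend on a hypothetical one-hot table of discretised indicators; the flag names and thresholds are assumptions, not the study's actual attributes.

```python
# Sketch: mine association rules from binary indicator flags and report the
# strongest rules, mirroring the "three strongest rules" mentioned above.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Each row: one company-year; each column: a binary flag such as
# "high_CO2", "high_energy_use", "dea_efficient", ... (placeholder schema).
records = pd.read_csv("cement_flags.csv").astype(bool)

frequent = apriori(records, min_support=0.2, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)

top3 = rules.sort_values(["confidence", "lift"], ascending=False).head(3)
print(top3[["antecedents", "consequents", "support", "confidence", "lift"]])
```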


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Hannan Amoozad Mahdiraji ◽  
Madjid Tavana ◽  
Pouya Mahdiani ◽  
Ali Asghar Abbasi Kamardi

Purpose Customer differences and similarities play a crucial role in service operations, and service industries need to develop various strategies for different customer types. This study aims to understand the behavioral patterns of customers in the banking industry by proposing a hybrid data mining approach with rule extraction and service operation benchmarking.
Design/methodology/approach The authors analyze customer data to identify the best customers using a modified recency, frequency and monetary (RFM) model and K-means clustering. The number of clusters is determined with a two-step K-means quality analysis based on the Silhouette, Davies–Bouldin and Calinski–Harabasz indices and the evaluation based on distance from average solution (EDAS). The best–worst method (BWM) and the total area based on orthogonal vectors (TAOV) are used next to sort the clusters. Finally, association rules and the Apriori algorithm are used to derive the customers' behavior patterns.
Findings As a result of implementing the proposed approach in the financial service industry, customers were segmented and ranked into six clusters by analyzing 20,000 records. Furthermore, frequent customer financial behavior patterns were recognized based on the demographic characteristics and financial transactions of customers. Thus, customer types were classified as highly loyal, loyal, high-interacting, low-interacting and missing customers. Eventually, appropriate strategies for interacting with each customer type were proposed.
Originality/value The authors propose a novel hybrid multi-attribute data mining approach for rule extraction and service operations benchmarking by combining data mining tools with a multilayer decision-making approach. The proposed hybrid approach has been implemented in a large-scale problem in the financial services industry.
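The cluster-count selection step might be sketched as below, computing the Silhouette, Davies–Bouldin and Calinski–Harabasz indices over candidate values of k on hypothetical RFM scores; the EDAS aggregation, BWM weighting and TAOV ranking from the paper are not reproduced here.

```python
# Sketch: score candidate cluster counts with three internal quality indices.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.metrics import (calinski_harabasz_score, davies_bouldin_score,
                             silhouette_score)
from sklearn.preprocessing import StandardScaler

# Placeholder RFM table: one row per customer.
rfm = pd.read_csv("rfm_scores.csv")[["recency", "frequency", "monetary"]]
X = StandardScaler().fit_transform(rfm)

for k in range(2, 11):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k,
          round(silhouette_score(X, labels), 3),          # higher is better
          round(davies_bouldin_score(X, labels), 3),      # lower is better
          round(calinski_harabasz_score(X, labels), 1))   # higher is better
# The three indices would then be aggregated (EDAS in the paper) to pick k.
```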


2019 ◽  
Vol 5 (2) ◽  
pp. 108-119
Author(s):  
Yeslam Al-Saggaf ◽  
Amanda Davies

Purpose The purpose of this paper is to discuss the design, application and findings of a case study in which a machine learning algorithm was applied to identify grievances expressed on Twitter in an Arabian context.
Design/methodology/approach To understand the characteristics of the Twitter users who expressed the identified grievances, data mining techniques and social network analysis were utilised. The study extracted a total of 23,363 tweets, which were stored as a data set. Application of the machine learning algorithm to this data set was followed by a data mining process to explore the characteristics of the Twitter users. The network of users was mapped, and individual interactivity and network density were calculated.
Findings The machine learning algorithm revealed 12 themes, all of which were underpinned by the coalition of Arab countries' blockade of Qatar. The data mining analysis revealed that the tweets could be grouped into three clusters; the main cluster included users with a large number of followers and friends but who did not mention other users in their tweets. The social network analysis revealed that, whilst a large proportion of users engaged in direct messages with others, the network ties between them were not registered as strong.
Practical implications Borum (2011) notes that invoking grievances is the first step in the radicalisation process. It is hoped that, by understanding these grievances, the study will shed light on what radical groups could invoke to win the sympathy of aggrieved people.
Originality/value The machine learning algorithm offered insights into the grievances expressed within the tweets in an Arabian context. The data mining and social network analyses revealed the characteristics of the Twitter users, highlighting opportunities for identifying and managing early intervention against radicalisation.
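The network density and individual interactivity measures mentioned above could be computed along the lines of the sketch below, assuming a simplified per-tweet mention structure; the records shown are placeholders.

```python
# Sketch: build a directed mention graph from tweets and report density and
# per-user interactivity (out-degree).
import networkx as nx

# Placeholder tweet records: author plus the users mentioned in the tweet.
tweets = [
    {"user": "user_a", "mentions": ["user_b"]},
    {"user": "user_b", "mentions": []},
    {"user": "user_c", "mentions": ["user_a", "user_b"]},
]

G = nx.DiGraph()
for t in tweets:
    G.add_node(t["user"])
    for m in t["mentions"]:
        G.add_edge(t["user"], m)

interactivity = dict(G.out_degree())     # how often each user addresses others
print("network density:", nx.density(G))
print("most interactive users:",
      sorted(interactivity.items(), key=lambda x: -x[1])[:5])
```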


2015 ◽  
Vol 25 (3) ◽  
pp. 416-434 ◽  
Author(s):  
Shintaro Okazaki ◽  
Ana M. Díaz-Martín ◽  
Mercedes Rozano ◽  
Héctor David Menéndez-Benito

Purpose – The purpose of this paper is to explore customer engagement in Twitter via data mining.
Design/methodology/approach – This study’s intended contributions are twofold: to find a clear connection among customer engagement, prosumption, and Web 2.0 in a context of service-dominant (S-D) logic; and to identify social networks created by prosumers. To this end, the study employed data mining techniques. Tweets about IKEA were used as a sample. The resulting algorithm, based on 300 tweets, was applied to 4,000 tweets to identify the patterns of electronic word-of-mouth (eWOM).
Findings – Social networks created in IKEA’s tweets consist of three forms of eWOM: objective statements, subjective statements, and knowledge sharing. Most objective statements are disseminated by satisfied or neutral customers, while subjective statements are disseminated by dissatisfied or neutral customers. Knowledge sharing is mainly carried out by satisfied customers, which seems to reflect prosumption behavior.
Research limitations/implications – This study provides partial evidence of customer engagement and prosumption in IKEA’s tweets. The results indicate that there are three forms of eWOM in the networks: objective statements, subjective statements, and knowledge sharing. It seems that IKEA successfully engaged customers in knowledge sharing, while negative opinions were mainly disseminated within a limited circle.
Practical implications – Firms should make more of an effort to identify prosumers via data mining, since these networks are hidden behind “self-proclaimed” followers. Prosumers differ from opinion leaders, since they actively participate in product development. Thus, firms should seek prosumers in order to more closely fit their products to consumer needs. As a practical strategy, firms could employ celebrities for promotional purposes and use them as a platform to convert their followers into prosumers. In addition, firms are encouraged to make public how they resolve problematic customer complaints so that customers can feel they are part of firms’ service development.
Originality/value – Theoretically, the study makes unique contributions by offering a synergic framework of S-D logic and Web 2.0. The conceptual framework collectively relates customer engagement, prosumption, and Web 2.0 to social networks. In addition, the idea of examining social networks based on different forms of eWOM has seldom been touched on in the literature. Methodologically, the study employed seven algorithms to choose the most robust model, which was later applied to 4,000 tweets.
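An illustrative sketch of the train-on-300 / apply-to-4,000 workflow follows; the seed-label files, column names and choice of text classifier are assumptions, not the authors' exact algorithm.

```python
# Sketch: train a text classifier on a small hand-coded seed set and tag the
# remaining tweets with one of the three eWOM forms.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

seed = pd.read_csv("labelled_tweets.csv")      # ~300 hand-coded tweets
rest = pd.read_csv("unlabelled_tweets.csv")    # ~4,000 remaining tweets

# Labels assumed: "objective", "subjective", "knowledge_sharing".
model = make_pipeline(TfidfVectorizer(min_df=2, ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(seed["text"], seed["ewom_form"])

rest["ewom_form"] = model.predict(rest["text"])
print(rest["ewom_form"].value_counts())
```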


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Jiake Fu ◽  
Huijing Tian ◽  
Lingguang Song ◽  
Mingchao Li ◽  
Shuo Bai ◽  
...  

Purpose This paper presents a new approach to productivity estimation for cutter suction dredger operations through data mining and learning from real-time big data.
Design/methodology/approach The paper used big data, data mining and machine learning techniques to extract features of cutter suction dredgers (CSD) for predicting productivity. An ElasticNet-SVR (Elastic Net-Support Vector Regression) method was used to filter the original monitoring data. Along with the actual working conditions of the CSD, 15 features were selected. Then, a box plot was used to clean the corresponding data by filtering out outliers. Finally, four algorithms, namely SVR (Support Vector Regression), XGBoost (Extreme Gradient Boosting), LSTM (Long Short-Term Memory network) and BP (Back Propagation) neural network, were used for modeling and testing.
Findings The paper provides a comprehensive forecasting framework for productivity estimation, including feature selection, data processing and model evaluation. The optimal coefficients of determination (R2) of the four algorithms were all above 80.0%, indicating that the selected features were representative. Finally, the BP neural network model coupled with the SVR model was selected as the final model.
Originality/value A machine learning algorithm incorporating domain expert judgments was used to select predictive features. The final optimal coefficient of determination (R2) of the coupled BP neural network and SVR model is 87.6%, indicating that the method proposed in this paper is effective for CSD productivity estimation.
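Two steps of this framework, box-plot (IQR) outlier removal and an SVR baseline evaluated with R2, might be sketched as follows; the column names stand in for the real 15 monitoring features and are not the study's actual schema.

```python
# Sketch: IQR-based outlier filtering of monitoring data, then an SVR baseline.
import pandas as pd
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

data = pd.read_csv("csd_monitoring.csv")          # placeholder monitoring export
features = [c for c in data.columns if c != "productivity"]

# Box-plot rule: drop rows outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] on any feature.
q1, q3 = data[features].quantile(0.25), data[features].quantile(0.75)
iqr = q3 - q1
mask = ~((data[features] < q1 - 1.5 * iqr) |
         (data[features] > q3 + 1.5 * iqr)).any(axis=1)
clean = data[mask]

X_tr, X_te, y_tr, y_te = train_test_split(
    clean[features], clean["productivity"], test_size=0.2, random_state=7)
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))
model.fit(X_tr, y_tr)
print("R2:", r2_score(y_te, model.predict(X_te)))  # the paper reports >0.80
```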


2021 ◽  
pp. 153-165
Author(s):  
Anshul Mishra ◽  
M. H. Khan ◽  
Waris Khan ◽  
Mohammad Zunnun Khan ◽  
Nikhil Kumar Srivastava
