Managing return flow of end-of-life products for product recovery operations

Author(s):  
Seval Ene ◽  
Nursel Öztürk

Growing consciousness of the environment and sustainability leads companies to adopt environmentally friendly strategies such as product recovery and product return management. These strategies are generally applied within the reverse logistics concept. Implementing reverse logistics successfully is complicated for companies due to uncertain system parameters such as the quantity, quality and timing of returns. A forecasting methodology is required to overcome these uncertainties and manage product returns. Accurate forecasting of product return flows provides insights to reverse logistics managers. This paper proposes a forecasting model based on grey modelling for managing the return flow of end-of-life products. Grey models are capable of handling data sets characterized by uncertainty and small size. The proposed model is applied to the data set of a specific end-of-life product. The results show that the proposed model can be successfully used as a forecasting tool for product returns and can provide supportive guidance for future planning. Keywords: End-of-life products, grey modelling, product return flow, product recovery.
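Grey forecasting on small, uncertain series is typically built on the GM(1,1) model. As a rough illustration of the technique (the paper's exact variant may differ), a minimal GM(1,1) fit-and-forecast can be sketched as:

```python
import math

def gm11_forecast(x0, steps=1):
    """Fit a GM(1,1) grey model to a short positive series and
    forecast `steps` values ahead. Works on very small samples."""
    n = len(x0)
    # 1-AGO: accumulated generating operation
    x1 = [sum(x0[:i + 1]) for i in range(n)]
    # background values: means of consecutive accumulated points
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    # least-squares estimate of (a, b) in the grey equation x0[k] + a*z[k] = b
    m = n - 1
    szz = sum(zi * zi for zi in z)
    sz = sum(z)
    szy = sum(zi * yi for zi, yi in zip(z, x0[1:]))
    sy = sum(x0[1:])
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det

    # time-response function of the whitened differential equation
    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    # de-accumulate (1-IAGO) to recover forecasts of the original series
    return [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + steps)]
```

For a near-geometric series such as `[2, 2.2, 2.42, 2.662]` the one-step forecast lands close to the next geometric term, which is why grey models suit short return-flow histories.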

Author(s):  
Kyungkoo Jun

Background & Objective: This paper proposes a Fourier-transform-inspired method to classify human activities from time-series sensor data. Methods: Our method begins by decomposing the 1D input signal into 2D patterns, motivated by the Fourier conversion. The decomposition is aided by a Long Short-Term Memory (LSTM) network, which captures the temporal dependency of the signal and produces encoded sequences. The sequences, once arranged into a 2D array, can represent fingerprints of the signals. The benefit of this transformation is that we can exploit recent advances in deep learning models for image classification, such as the Convolutional Neural Network (CNN). Results: The proposed model is therefore a combination of LSTM and CNN. We evaluate the model on two data sets. On the first data set, which is more standardized than the other, our model outperforms or at least equals previous work. For the second data set, we devise schemes to generate training and testing data by varying the window size, the sliding size, and the labeling scheme. Conclusion: The evaluation results show accuracy above 95% in some cases. We also analyze the effect of these parameters on performance.
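The middle step of this pipeline, arranging a 1D encoded sequence into a 2D "fingerprint" that a CNN can consume, can be sketched on its own; the helper below is hypothetical and elides the LSTM encoder and the CNN classifier entirely:

```python
def signal_to_fingerprint(encoded, width):
    """Arrange a 1D encoded sequence into a 2D array (row-major),
    zero-padding the last row, so it can be fed to an image model."""
    rows = -(-len(encoded) // width)  # ceiling division
    padded = list(encoded) + [0.0] * (rows * width - len(encoded))
    return [padded[r * width:(r + 1) * width] for r in range(rows)]
```

In the full model the `encoded` values would come from the LSTM's output sequence, and the resulting 2D array would be the CNN's input channel.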


2012 ◽  
Vol 263-266 ◽  
pp. 2173-2178
Author(s):  
Xin Guang Li ◽  
Min Feng Yao ◽  
Li Rui Jian ◽  
Zhen Jiang Li

A probabilistic neural network (PNN) speech recognition model based on the partition clustering algorithm is proposed in this paper. The most important advantage of PNN is that training is easy and instantaneous, so PNN is capable of real-time speech recognition. In addition, since the selection of the training data set is one of the most important issues in improving PNN performance, this paper proposes using the partition clustering algorithm to select the data. The proposed model is tested on two data sets of spoken Arabic numbers, with promising results. Its performance is compared to a single back-propagation neural network and an integrated back-propagation neural network. The final comparison shows that the proposed model performs better than the other two neural networks, with an accuracy rate of 92.41%.
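A PNN is essentially a Parzen-window classifier, which is why its "training" is instantaneous: patterns are simply stored. A minimal sketch (the partition-clustering step, which would pick representative patterns to store, is left out):

```python
import math

def pnn_classify(patterns, labels, x, sigma=0.5):
    """Minimal probabilistic neural network: sum Gaussian kernel
    responses per class and return the class with the largest
    estimated density at query point x."""
    scores = {}
    for xi, yi in zip(patterns, labels):
        d2 = sum((a - b) ** 2 for a, b in zip(xi, x))
        scores[yi] = scores.get(yi, 0.0) + math.exp(-d2 / (2 * sigma ** 2))
    return max(scores, key=scores.get)
```

In the paper's setting, `patterns` would be cluster representatives chosen by partition clustering rather than the raw training set, which shrinks the pattern layer and speeds up recognition.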


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4408 ◽  
Author(s):  
Hyun-Myung Cho ◽  
Heesu Park ◽  
Suh-Yeon Dong ◽  
Inchan Youn

The goals of this study are to suggest a better classification method for detecting stressed states from raw electrocardiogram (ECG) data and a method for training a deep neural network (DNN) with a smaller data set. We propose an end-to-end architecture to detect stress using raw ECGs. The architecture consists of successive stages containing convolutional layers. Two kinds of data sets are used to train and validate the model: a driving data set and a mental arithmetic data set, which is smaller than the driving data set. We apply a transfer learning method to train the model with the small data set. The proposed model shows better performance, based on receiver operating characteristic curves, than conventional methods. Compared with other DNN methods using raw ECGs, the proposed model improves the accuracy from 87.39% to 90.19%. The transfer learning method improves accuracy by 12.01% and 10.06% when 10 s and 60 s of ECG signals, respectively, are used in the model. In conclusion, our model outperforms previous models using raw ECGs from a small data set, and we believe it can contribute significantly to mobile healthcare for stress management in daily life.
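The transfer learning idea, fitting the small data set starting from parameters learned on the larger one, can be shown in a drastically simplified setting. Here a toy logistic model stands in for the convolutional network; the names and training scheme are illustrative, not the paper's:

```python
import math

def train_logreg(data, w=None, lr=0.5, epochs=200):
    """Tiny logistic-regression trainer via SGD. Passing `w` warm-starts
    training from existing weights, the essence of transfer learning:
    pre-train on the large set, then fine-tune on the small set."""
    if w is None:
        w = [0.0] * (len(data[0][0]) + 1)  # feature weights + bias (last slot)
    for _ in range(epochs):
        for x, y in data:
            z = w[-1] + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log loss w.r.t. z
            for i, xi in enumerate(x):
                w[i] -= lr * g * xi
            w[-1] -= lr * g
    return w
```

Usage would mirror the paper's two-stage scheme: `w_big = train_logreg(driving_data)` followed by `w_small = train_logreg(arithmetic_data, w=list(w_big), epochs=20)`, so the small set only refines, rather than relearns, the representation.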


2017 ◽  
Vol 73 (3) ◽  
pp. 481-499 ◽  
Author(s):  
Amed Leiva-Mederos ◽  
Jose A. Senso ◽  
Yusniel Hidalgo-Delgado ◽  
Pedro Hipola

Purpose: Information from Current Research Information Systems (CRIS) is stored in different formats, on platforms that are not compatible, or even in independent networks. It would be helpful to have a well-defined methodology that allows management data to be processed from a single site, so as to take advantage of the capacity to link disperse data found in different systems, platforms, sources and/or formats. Based on the functionalities and materials of the VLIR project, the purpose of this paper is to present a model that provides interoperability by means of semantic alignment techniques and metadata crosswalks, and facilitates the fusion of information stored in diverse sources.
Design/methodology/approach: After reviewing the state of the art regarding the diverse mechanisms for achieving semantic interoperability, the paper analyzes the following: the specific coverage of the data sets (type of data, thematic coverage and geographic coverage); the technical specifications needed to retrieve and analyze a distribution of the data set (format, protocol, etc.); the conditions of re-utilization (copyright and licenses); and the "dimensions" included in the data set, as well as the semantics of these dimensions (the syntax and the taxonomies of reference). The semantic interoperability framework presented here implements semantic alignment and metadata crosswalks to convert information from three different systems (ABCD, Moodle and DSpace) and integrate all the databases in a single RDF file.
Findings: The paper also includes an evaluation that compares, by means of recall and precision calculations, the proposed model with identical queries made via Open Archives Initiative and SQL, in order to estimate its efficiency. The results have been satisfactory, since the semantic interoperability facilitates exact retrieval of information.
Originality/value: The proposed model enhances management of the syntactic and semantic interoperability of the CRIS system designed. In a real setting of use it achieves very positive results.
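A metadata crosswalk of the kind described is, at its core, a mapping from each source schema's field names onto a shared vocabulary, applied before the records are merged and serialized to RDF. The field names below are invented for illustration and are not taken from the ABCD, Moodle or DSpace schemas:

```python
# Hypothetical per-source field mappings onto Dublin Core style keys.
CROSSWALK = {
    "abcd":   {"titulo": "dc:title", "autor": "dc:creator"},
    "dspace": {"dc.title": "dc:title", "dc.contributor.author": "dc:creator"},
}

def to_common(record, source):
    """Map a source-specific metadata record onto the shared keys,
    dropping fields the crosswalk does not cover."""
    mapping = CROSSWALK[source]
    return {mapping[k]: v for k, v in record.items() if k in mapping}
```

Once every record carries the same keys, records from the three systems can be merged and emitted as triples into the single RDF file the framework targets.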


2001 ◽  
Vol 12 (5) ◽  
pp. 534-547 ◽  
Author(s):  
Neil Ferguson ◽  
Jim Browne

Author(s):  
Ioana Olariu

This article is a theoretical approach to retail reverse logistics. Environmental concern and current marketing strategy have spurred retailers to implement strategies that facilitate product returns from end customers. Reverse logistics, the process governing this return flow, encompasses activities such as the movement of returned products, facilities to accommodate returned items, and the overall remedy process for returned items. The retail industry, under great competitive pressure, has used return policies as a competitive weapon. Grocery retailers were the first to focus serious attention on the problem of returns and to develop reverse logistics innovations such as reclamation centers, which in turn led to the establishment of centralized return centers. Centralizing returns has brought significant benefits to most firms that have implemented it. Over the last several years retailers have consolidated, and reverse logistics is now seen as more important than ever. This reverse distribution activity can be crucial to the survival of companies, because the permanent goodwill of the company is at stake. Businesses succeed because they respond to both external and internal changes and adjust effectively to remain competitive.


Author(s):  
Yada Zhu ◽  
Jianbo Li ◽  
Jingrui He ◽  
Brian L. Quanz ◽  
Ajay A. Deshpande

With the rapid growth of e-tail, the cost of handling returned online orders has also increased significantly and has become a major challenge in the e-commerce industry. Accurate prediction of product returns allows e-tailers to prevent problematic transactions in advance. However, the limited existing work on modeling customer online shopping behaviors and predicting return actions fails to integrate the rich information in the product purchase and return history (e.g., return history, purchase-no-return behavior, and customer/product similarity). Furthermore, the large-scale data sets involved in this problem, typically consisting of millions of customers and tens of thousands of products, render existing methods inefficient and ineffective at predicting product returns. To address these problems, in this paper we propose a weighted hybrid graph to represent the rich information in the product purchase and return history, in order to predict product returns. The proposed graph consists of both customer nodes and product nodes, undirected edges reflecting customer return history and customer/product similarity based on their attributes, and directed edges discriminating purchase-no-return and no-purchase actions. Based on this representation, we study a random-walk-based local algorithm for predicting product return propensity for each customer, whose computational complexity depends only on the size of the output cluster rather than the entire graph. This property makes the proposed local algorithm particularly suitable for processing large-scale data sets to predict product returns. To test the performance of the proposed techniques, we evaluate the graph model and algorithm on multiple e-commerce data sets, showing improved performance over state-of-the-art methods.
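The random-walk intuition can be illustrated with a simplified Monte Carlo random walk with restart on a tiny customer/product graph. This is not the paper's local algorithm (which is deterministic and has output-sized complexity); it is only a sketch of how walk frequencies rank products for a customer, with edge weights encoded by duplicating neighbors:

```python
import random

def return_propensity(adj, start, restart=0.15, steps=20000, seed=7):
    """Approximate random walk with restart: visit frequencies of
    product nodes rank the starting customer's return propensity."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    visits = {}
    node = start
    for _ in range(steps):
        nbrs = adj.get(node, [])
        # restart at the customer, or when the walk hits a dead end
        if rng.random() < restart or not nbrs:
            node = start
        else:
            node = rng.choice(nbrs)
        visits[node] = visits.get(node, 0) + 1
    total = sum(visits.values())
    return {n: c / total for n, c in visits.items()}
```

Because the walk restarts at the customer, scores concentrate in that customer's neighborhood, which is what makes a local (cluster-sized, not graph-sized) computation possible in the first place.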


Symmetry ◽  
2020 ◽  
Vol 12 (12) ◽  
pp. 2094
Author(s):  
Hashem Alyami ◽  
Abdullah Alharbi ◽  
Irfan Uddin

Deep learning algorithms are becoming common in solving different supervised and unsupervised learning problems. Many deep learning algorithms were developed in the last decade to solve learning problems in domains such as computer vision, speech recognition and machine translation, and in computer vision research deep learning has become overwhelmingly popular. In solving computer vision problems, we typically take a CNN (Convolutional Neural Network) that is trained from scratch, or sometimes a pre-trained model that is further fine-tuned on the available dataset. Training the model from scratch on new datasets suffers from catastrophic forgetting: when a new dataset is used to train the model, it forgets the knowledge it obtained from the existing dataset. In other words, additional datasets do not help the model increase its knowledge. The problem with pre-trained models is that CNN models are mostly trained on open datasets containing instances from specific regions, which results in disturbing label predictions when the same model is applied to datasets collected in a different region. There is therefore a need to reduce this geo-diversity gap in computer vision problems in the developing world. In this paper, we explore the problems of models trained from scratch, along with models pre-trained on a large dataset, using a dataset specifically developed to understand geo-diversity issues in open datasets. The dataset contains images of different wedding scenarios in South Asian countries. We developed a Lifelong CNN that can incrementally increase its knowledge, i.e., it learns labels from the new dataset while retaining the existing knowledge of open data sets. The proposed model demonstrates the highest accuracy compared to models trained from scratch or pre-trained models.


2021 ◽  
Author(s):  
Mehrnaz Ahmadi ◽  
Mehdi Khashei

Abstract Support vector machines (SVMs) are one of the most popular and widely used approaches in modeling. Various kinds of SVM models have been developed in the prediction and classification literature to cover different purposes. Crisp and fuzzy support vector machines are a well-known branch of modeling approaches, frequently applied to certain and uncertain modeling, respectively. However, each of these models can only be used efficiently in its specified domain and cannot yield appropriate and accurate results in the opposite situation, while real-world systems and data sets often contain both certain and uncertain patterns that are intricately mixed together and need to be modeled simultaneously. In this paper, a generalized support vector machine (GSVM) is proposed that can simultaneously benefit from the unique advantages of the certain and uncertain versions of the traditional support vector machines in their own specialized categories. In the proposed model, the underlying data set is first categorized into two classes of certain and uncertain patterns. Then, certain patterns are modeled by a support vector machine, and uncertain patterns are modeled by a fuzzy support vector machine. After that, the relationship function, as well as the relative importance of each component, is estimated by another support vector machine, and the final forecasts of the proposed model are calculated. Empirical results of wind speed forecasting indicate that the proposed method not only achieves more accurate results than support vector machines (SVMs) and fuzzy support vector machines (FSVMs) but also yields better forecasting performance than traditional fuzzy and nonfuzzy single models and traditional preprocessing-based hybrid models of SVMs.
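The decompose-and-combine pipeline described above can be sketched independently of any SVM library. In the toy below, simple constant models stand in for the crisp SVM, the fuzzy SVM, and the learned combiner, so only the wiring of the GSVM scheme is shown:

```python
def fit_split_and_blend(data, is_uncertain, fit_certain, fit_uncertain, blend):
    """Skeleton of the GSVM-style scheme: partition training patterns
    into certain/uncertain, fit one component model per partition,
    and combine the two component predictions with `blend`. In the
    paper the blend is itself learned by a third SVM fitted on the
    pairs of component outputs."""
    certain = [d for d in data if not is_uncertain(d)]
    uncertain = [d for d in data if is_uncertain(d)]
    m_c = fit_certain(certain)      # stands in for the crisp SVM
    m_u = fit_uncertain(uncertain)  # stands in for the fuzzy SVM
    return lambda x: blend(m_c(x), m_u(x))
```

With real components, `fit_certain`/`fit_uncertain` would train an SVM and a fuzzy SVM, and `blend` would be replaced by a regressor trained on `(m_c(x), m_u(x)) -> y` so the relative importance of each component is estimated from data rather than fixed.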


2018 ◽  
Vol 14 (2) ◽  
pp. 45-57
Author(s):  
Adil Rashid ◽  
Tariq Rashid Jan ◽  
Akhtar Hussain Bhat ◽  
Z. Ahmad

Abstract There are diverse lifetime models available to researchers to predict the uncertain behavior of random events, but at times they fail to provide an adequate fit for some complex and new data sets. New probability distributions are emerging as lifetime models to meet this ever-growing demand for modeling complex real-world phenomena from different sciences with better efficiency. In this manuscript we compound the Ailamujia distribution with the power series distribution. The newly developed distribution, called the Ailamujia power series distribution, reduces to four new special lifetime models under simple specific parameter settings. Some important mathematical properties are also discussed in the form of propositions. Furthermore, characterization and some statistical properties, including the moment generating function, moments and parameter estimation, are discussed. Finally, the potential of the newly proposed model is analyzed statistically and graphically, and the statistical analysis establishes that the proposed model offers a better fit for some lifetime data sets.
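The compounding construction can be written out explicitly. The sketch below assumes the common minimum-compounding scheme and the usual Ailamujia density f(x) = 4βx e^(-2βx) scaled by β; the manuscript's exact construction and parameterization may differ:

```latex
% Power series frame: P(N = n) = a_n \theta^n / C(\theta),\ n = 1, 2, \dots,
% where C(\theta) = \sum_{n \ge 1} a_n \theta^n.
% Ailamujia survival function (parameter \beta > 0):
%   \bar G(x) = (1 + 2\beta x)\, e^{-2\beta x}, \qquad x > 0.
% Compounding via X = \min(X_1, \dots, X_N), with the X_i i.i.d. Ailamujia
% and independent of N, gives the survival function of the compound family:
%   S(x) = E\big[\bar G(x)^N\big] = \frac{C\big(\theta\,\bar G(x)\big)}{C(\theta)}.
% Each standard choice of a_n (geometric, Poisson, logarithmic, binomial)
% then yields one special lifetime model of the family.
```

The identity follows by conditioning on N: P(X > x | N = n) = G̅(x)^n, and summing over the power series weights reproduces C evaluated at θG̅(x).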

