Diagnosis and economic impact of operational variability – a case from the chemical forest industry

2015, Vol 21 (3), pp. 294-309
Author(s): Jukka Mikael Rantamäki, Olli Saarela

Purpose – This paper deals with the identification and diagnosis of operational variability in chemical processes, a problem that is common in mills but little explored in the literature. The Cross-Industry Standard Process for Data Mining (CRISP-DM) is a widely used approach to problem solving. The purpose of this paper is twofold: first, to contribute to the body of knowledge on applying CRISP-DM in a pulp mill production process and the special issues that need to be considered in this context; second, to quantify the cost of variation, since exact figures for the cost increase caused by variation in pulp production have not been reported previously. Design/methodology/approach – In the case studied, variation in a pulp mill batch cooking process had increased. CRISP-DM was applied to identify the causes of this variation. Findings – The cycle of variation was identified and found to be related to the batch cooking process cycle time. Information from this analysis made it possible to detect otherwise unobserved defective steam nozzles, and the defective equipment was repaired and improved. Further improvement was achieved when fouling of a heat exchanger was found to be the root cause of long-term variability. By applying CRISP-DM, equipment defects and fouling were identified as the root causes of the increased variation, and the resulting higher manufacturing costs were estimated. The Taguchi loss function is a possible tool for estimating the cost of variation in pulp manufacturing. Originality/value – This paper provides new knowledge on implementing CRISP-DM and the Taguchi loss function in the pulp and paper manufacturing process.
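The paper's loss-function parameters are not given in the abstract; the following is a minimal Python sketch, with purely illustrative numbers, of how a quadratic Taguchi loss function can turn process variation into a cost estimate (the average loss equals k times the squared bias plus the variance).

import numpy as np

def taguchi_loss(y, target, k):
    """Quadratic Taguchi loss L(y) = k * (y - target)**2 for a single observation."""
    return k * (y - target) ** 2

def average_loss(measurements, target, k):
    """Average loss over observed batches: k * (bias**2 + variance)."""
    y = np.asarray(measurements, dtype=float)
    return k * ((y.mean() - target) ** 2 + y.var())

# Hypothetical example (values are not from the paper): kappa number of cooked pulp,
# target 30, cost coefficient k expressed in EUR per unit^2 per batch.
kappa = [29.1, 31.4, 30.2, 28.7, 32.0, 30.9]
print(f"Estimated cost of variation: {average_loss(kappa, target=30.0, k=50.0):.1f} EUR/batch")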

2018, Vol 11 (2), pp. 233-253
Author(s): Agung Sutrisno, Indra Gunawan, Iwan Vanany, Mohammad Asjad, Wahyu Caesarendra

Purpose – An improved model for evaluating the criticality of non-value-added activities (waste) in operations is necessary for realizing sustainable manufacturing practices. The purpose of this paper is to improve the decision support model for evaluating the risk criticality of lean waste occurrence by considering the weights of modified FMEA indices and the influence of waste-worsening factors that escalate the magnitude of waste risk. Design/methodology/approach – The integration of entropy and the Taguchi loss function into the decision support model of modified FMEA is presented to rectify the limitations of previous risk reprioritization models in modified FMEA studies. The weights of the probability and loss components are quantified using entropy. An industrial case study is used to test the applicability of the integrated model in a practical situation. Findings – The proposed model overcomes the limitations of determining the weights of modified FMEA indices subjectively. The inclusion of waste-worsening factors and Taguchi loss functions enables the FMEA team to articulate the severity of waste consequences more appropriately than the ordinal scales used to rank lean waste risk in earlier modified FMEA references. Research limitations/implications – When appraising the criticality of lean waste risk, ignoring the weighting of FMEA indices may lead to inaccurate risk-based decision making. This paper provides insights for scholars, practitioners and others concerned with lean operations into the significance of considering the impact of FMEA indices and waste-worsening factors when evaluating the criticality of lean waste risks. Practical implications – The method adopted quantifies the criticality of lean waste, and the inclusion of FMEA index weighting in modified FMEA provides insight and an exemplar for tackling lean waste risk and determining the most critical waste affecting the performability of company operations. Originality/value – The integration of entropy and the Taguchi loss function for appraising the criticality of lean waste in modified FMEA is the first in the lean management discipline. These findings will be highly useful for professionals wishing to implement a lean waste reduction strategy.
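The abstract does not reproduce the weighting calculation; the sketch below shows one standard Shannon-entropy weighting of decision indices (not necessarily the authors' exact formulation), applied to hypothetical lean-waste ratings.

import numpy as np

def entropy_weights(matrix):
    """Shannon-entropy objective weights for the columns (indices) of a decision matrix.

    matrix: rows are alternatives (waste types), columns are FMEA indices.
    """
    X = np.asarray(matrix, dtype=float)
    P = X / X.sum(axis=0)                      # normalize each column to a distribution
    m = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        E = -np.nansum(P * np.log(P), axis=0) / np.log(m)   # entropy per column, 0*log(0) -> 0
    d = 1.0 - E                                # degree of divergence of each index
    return d / d.sum()                         # weights sum to 1

# Hypothetical ratings of four lean wastes on three modified-FMEA indices
ratings = [[7, 4, 120.0],
           [5, 6,  80.0],
           [8, 3, 200.0],
           [6, 5,  60.0]]
print(entropy_weights(ratings))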


2015, Vol 22 (7), pp. 1281-1300
Author(s): Satyendra Kumar Sharma, Vinod Kumar

Purpose – Selection of a logistics service provider (LSP), also known as a third-party logistics (3PL) provider, is a critical decision because logistics affects both the top and the bottom line. Companies treat logistics as a cost driver, and at the time of the LSP selection decision many important decision criteria are left out. 3PL selection is a multi-criteria decision-making process. The purpose of this paper is to develop an integrated approach combining quality function deployment (QFD) and the Taguchi loss function (TLF) to select the optimal 3PL. Design/methodology/approach – Multiple criteria are derived from the company requirements using the house of quality. The 3PL service attributes are developed using QFD and the relative importance of the attributes is assessed. TLFs are used to measure the performance of each 3PL on each decision variable, and composite weighted loss scores are used to rank the 3PLs. Findings – QFD is a better tool because it connects the attributes used in a decision problem to the decision maker's requirements. In total, 15 criteria were used, and the TLF provides performance on these criteria. Practical implications – The proposed model provides a methodology for making informed decisions related to 3PL selection and may be converted into a decision support system. Originality/value – The approach proposed in this paper is novel in that it connects the 3PL selection problem to practice in terms of identifying criteria and provides a single numerical value in terms of Taguchi loss.
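The 15 criteria and their QFD-derived weights are not listed in the abstract; the sketch below only illustrates how composite weighted Taguchi loss scores could rank candidate providers, with made-up criteria, targets and weights.

# Hypothetical sketch: rank 3PL candidates by composite weighted Taguchi loss.
# Criteria, targets, weights and scores below are illustrative, not from the paper.

def taguchi_loss(value, target, k=1.0):
    """Nominal-the-best quadratic Taguchi loss around a target value."""
    return k * (value - target) ** 2

criteria = {            # criterion: (target, QFD-derived weight)
    "on_time_delivery_pct": (100.0, 0.5),
    "cost_index":           (1.0,   0.3),
    "damage_rate_pct":      (0.0,   0.2),
}

candidates = {
    "3PL_A": {"on_time_delivery_pct": 96.0, "cost_index": 1.10, "damage_rate_pct": 0.8},
    "3PL_B": {"on_time_delivery_pct": 92.0, "cost_index": 0.95, "damage_rate_pct": 1.5},
}

def composite_loss(scores):
    return sum(w * taguchi_loss(scores[c], t) for c, (t, w) in criteria.items())

# Lower composite loss = better candidate
for name, scores in sorted(candidates.items(), key=lambda kv: composite_loss(kv[1])):
    print(name, round(composite_loss(scores), 3))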


2019, Vol 36 (4), pp. 526-551
Author(s): Mohammad Hosein Nadreri, Mohamad Bameni Moghadam, Asghar Seif

Purpose – The purpose of this paper is to develop an economic statistical design based on the concepts of adjusted average time to signal (AATS) and ANF for the X̄ control chart under a Weibull shock model with multiple assignable causes. Design/methodology/approach – The design used in this study is based on a multiple-assignable-causes cost model. The newly proposed cost model is compared, using the same cost and time parameters, in terms of the optimal design parameters obtained under uniform and non-uniform sampling schemes. Findings – Numerical results indicate that the cost model with non-uniform sampling has a lower cost than that with uniform sampling. Using sensitivity analysis, the effect of changing the fixed and variable time and cost parameters and the Weibull distribution parameters on the optimum values of the design parameters and the loss cost is examined and discussed. Practical implications – This research adds to the body of knowledge relating to the quality control of process monitoring systems. The paper may be of particular interest to practitioners of quality systems in factories where multiple assignable causes affect the production process. Originality/value – The cost functions for uniform and non-uniform sampling schemes are presented based on multiple assignable causes with the AATS and ANF concepts for the first time.
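The abstract does not state how the non-uniform sampling intervals are constructed; a common choice in the Weibull shock model literature, shown here only as an assumed sketch, is to place sampling times so that the integrated hazard (and hence the conditional probability of a shift) is equal in every interval, which shortens the intervals as the process ages when the Weibull shape parameter exceeds one.

import numpy as np

def nonuniform_sampling_times(h1, shape, n_samples):
    """Sampling times with equal integrated Weibull hazard per interval.

    With cumulative hazard H(t) = (t / scale)**shape, requiring
    H(t_j) - H(t_{j-1}) = H(h1) for every j gives t_j = h1 * j**(1 / shape);
    the scale parameter cancels out of this construction.
    """
    j = np.arange(1, n_samples + 1)
    return h1 * j ** (1.0 / shape)

# Illustrative parameters only: first sample at 2.0 h, Weibull shape 2 (increasing hazard).
times = nonuniform_sampling_times(h1=2.0, shape=2.0, n_samples=6)
print(np.round(times, 2))           # sampling epochs
print(np.round(np.diff(times), 2))  # intervals shrink as the failure risk rises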


2014, Vol 20 (2), pp. 122-134
Author(s): Kevin M. Taaffe, Robert William Allen, Lindsey Grigg

Purpose – Performance measurements, or metrics, quantify a company's performance and behavior and are used to help an organization achieve and maintain success. Without performance metrics, it is difficult to know whether the firm is meeting requirements or making desired improvements. During the course of this study with Lockheed Martin, the research team was tasked with determining the effectiveness of the site's existing performance metrics. The paper aims to discuss these issues. Design/methodology/approach – Research indicates that there are five key elements that influence the success of a performance metric. A standardized method of determining whether or not a metric has the right mix of these elements was created in the form of a metrics scorecard. Findings – The scorecard survey was successful in revealing good metric use as well as problematic metrics. In the quality department, the Document Rejects metric has been reworked and is no longer within the executive's metric deck. It was also recommended to add root cause analysis and to quantify and track the cost of non-conformance and the overall cost of quality. In total, the number of site-wide metrics has decreased from 75 to 50. The 50 remaining metrics are undergoing a continuous improvement process in conjunction with the metric scorecard tool developed in this research. Research limitations/implications – The metrics scorecard should be used site-wide for an assessment of all metrics; the focus of this paper is on the metrics within the quality department. Practical implications – Putting a quick and efficient metrics assessment technique in place was critical. With the leadership and participation of Lockheed Martin, this goal was accomplished. Originality/value – This paper presents the process of metrics evaluation and the issues that were encountered during the process, including insights that would not have been easily documented without this mechanism. Lockheed Martin has used results from this research, and other industries could also apply the methods proposed here.
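The five key elements are not named in the abstract, so the scorecard sketch below uses placeholder element names purely to illustrate how a metric could be scored against a fixed checklist.

# Hypothetical sketch of a metrics scorecard: element names and pass/fail checks
# are placeholders, not the five elements identified in the paper.

ELEMENTS = ["aligned_to_goal", "actionable", "clearly_defined", "timely", "owned"]

def score_metric(assessment):
    """Return the fraction of scorecard elements a metric satisfies (0.0 to 1.0)."""
    return sum(bool(assessment.get(e, False)) for e in ELEMENTS) / len(ELEMENTS)

document_rejects = {
    "aligned_to_goal": True, "actionable": False,
    "clearly_defined": True, "timely": True, "owned": False,
}
print(score_metric(document_rejects))  # e.g. 0.6 -> candidate for rework or retirement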


2016, Vol 34 (6), pp. 641-654
Author(s): David Jansen van Vuuren

Purpose – The purpose of this paper is twofold: primarily, to argue that the profits method, specifically a discounted cash flow (DCF)-based profits method, should be the preferred method of valuation when valuing specialised property; and secondarily, to make technical recommendations for the application of the method. Design/methodology/approach – Literature on the theory of the profits method was reviewed, as well as valuations performed in practice. Improvements to the profits method are suggested from a review of six valuations conducted in South Africa in the specialised property sectors. A qualitative approach is followed, with broad principles extracted from the valuation reports as implications for, and improvements to, the profits method. Findings – The profits method is more flexible and sophisticated than the cost approach in taking systematic and unsystematic risk into account. The profits method is more accurate than the cost approach in delivering a true reflection of the value of specialised property for any purpose, but specifically for mortgage lending purposes, and it reduces the credit exposure risk of financial institutions. It also decreases pricing inefficiencies that could be exploited by buyers and sellers. Practical implications – Three improvements to the profits method are suggested. First, revenue could be forecast using a probability-weighted approach. Second, a modification is suggested to the capitalisation rate formula in the calculation of G. Third, a market rental aggregation anchoring and judgement-based approach is suggested as the rationale for determining the hypothetical rental split. Originality/value – There seems to be a general lack of literature on the profits method of valuation and its application to specialised properties, specifically a DCF-based approach, and this paper is a technical contribution to the body of knowledge on the topic.
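Neither the cash flows nor the probability weights are given in the abstract; the sketch below only illustrates the general shape of a DCF-based profits valuation with probability-weighted revenue scenarios, with every figure invented for the example.

# Hypothetical sketch: DCF-based profits method with probability-weighted revenue.
# All scenarios, probabilities, margins and rates are illustrative, not from the paper.

def expected_revenue(scenarios):
    """Probability-weighted revenue for one year; scenarios = [(probability, revenue), ...]."""
    return sum(p * r for p, r in scenarios)

def dcf_value(yearly_scenarios, operating_margin, discount_rate, exit_cap_rate):
    """Present value of operating profits plus a capitalised terminal value."""
    value = 0.0
    profit = 0.0
    for year, scenarios in enumerate(yearly_scenarios, start=1):
        profit = expected_revenue(scenarios) * operating_margin
        value += profit / (1 + discount_rate) ** year
    terminal = (profit / exit_cap_rate) / (1 + discount_rate) ** len(yearly_scenarios)
    return value + terminal

forecast = [
    [(0.3, 9.0e6), (0.5, 10.0e6), (0.2, 11.5e6)],   # year 1 revenue scenarios
    [(0.3, 9.5e6), (0.5, 10.5e6), (0.2, 12.0e6)],   # year 2
    [(0.3, 10.0e6), (0.5, 11.0e6), (0.2, 12.5e6)],  # year 3
]
print(round(dcf_value(forecast, operating_margin=0.25, discount_rate=0.12, exit_cap_rate=0.10)))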


TAPPI Journal, 2015, Vol 14 (6), pp. 395-402
Author(s): Flávio Marcelo Correia, José Vicente Hallak D’Angelo, Sueli Aparecida Mingoti

Alkali charge is one of the most relevant variables in the continuous kraft cooking process. The white liquor mass flow rate can be determined by analyzing the bulk density of the chips fed to the process. At the mills, the total time for this analysis is usually greater than the residence time in the digester, which can lead to an increasing error in the mass of white liquor added relative to the specified alkali charge. This paper proposes a new approach using the Box-Jenkins methodology to develop a dynamic model for predicting chip bulk density. Industrial data comprising 1948 observations were gathered over a period of 12 months from a Kamyr continuous digester at a bleached eucalyptus kraft pulp mill in Brazil. Autoregressive integrated moving average (ARIMA) models were evaluated according to different statistical decision criteria, leading to the choice of ARIMA(2,0,2) as the best forecasting model, which was validated against a new dataset gathered during 2 months of operations. The combination of predictors showed more accurate results than those obtained by laboratory analysis, allowing a reduction of around 25% in the chip bulk density error relative to the alkali addition amount.
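The mill data are not reproduced here; the sketch below, using the statsmodels library on synthetic data, only illustrates how an ARIMA(2,0,2) model of the kind selected in the paper can be fitted and used for out-of-sample forecasting.

import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for the chip bulk density series (kg/m^3); the real industrial data
# (1948 observations over 12 months) are not reproduced in the abstract.
rng = np.random.default_rng(0)
density = 160.0 + pd.Series(rng.normal(0.0, 3.0, 600)).rolling(5, min_periods=1).mean()

train, test = density[:500], density[500:]

model = ARIMA(train, order=(2, 0, 2))   # ARIMA(p=2, d=0, q=2), the order chosen in the paper
fit = model.fit()

forecast = fit.forecast(steps=len(test))
mae = np.mean(np.abs(forecast.values - test.values))
print(f"AIC: {fit.aic:.1f}, hold-out mean absolute error: {mae:.2f} kg/m^3")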


Author(s): H.J. Ryu, A.B. Shah, Y. Wang, W.-H. Chuang, T. Tong

Abstract When failure analysis is performed on a circuit composed of FinFETs, defect isolation in some cases must reach the fin level inside the problematic FinFET for a complete understanding of the root cause. This work shows the successful application of electron-beam alteration of current flow, combined with nanoprobing, for precise isolation of a defect down to the fin level. To understand the leakage mechanism, a transmission electron microscopy (TEM) slice was made along the leaky drain contact (perpendicular to the fin direction) by focused ion beam thinning and lift-out. The TEM image shows the contact and the fin, and a stacking fault was found in the body of the silicon fin, highlighted by the technique described in this paper.


Author(s): J.S. McMurray, C.M. Molella

Abstract The root cause of failure of 90 nm body-contacted nFETs was identified using scanning capacitance microscopy (SCM) and scanning spreading resistance microscopy (SSRM). The failure mechanism was identified using both cross-sectional imaging and imaging of the active silicon/buried oxide (BOX) interface in plan view. This is the first report of back-side plan-view SCM and SSRM data for SOI devices. This unique plan view shows that the root cause of the failure is an under-doped link-up region between the body contacts and the active channel of the device.


Author(s): Michael Woo, Marcos Campos, Luigi Aranda

Abstract A component failure has the potential to significantly impact the cost, manufacturing schedule, and/or the perceived reliability of a system, especially if the root cause of the failure is not known. A failure analysis is often key to mitigating the effects of a component-level failure on a customer or a system: minimizing schedule slips, minimizing related costs accrued to the customer, and allowing the system to be completed with confidence that the reliability of the product has not been compromised. This case study shows how a detailed and systematic failure analysis was able to determine the exact cause of failure of a multiplexer in a high-reliability system, which allowed the manufacturer to proceed confidently with production, knowing that the failure was not a systemic issue but rather a random “one time” event.


Author(s): Ian Kearney, Stephen Brink

Abstract The shift in power conversion and power management applications to thick copper clip technologies and thinner silicon dies enables high-current connections (overcoming the limitations of common wire bonds) and enhances the heat dissipation properties of System-in-Package solutions. Powerstage innovation integrates enhanced gate drivers with two MOSFETs, combining vertical current flow with a lateral power MOSFET. It provides a low on-resistance and requires an extremely low gate charge in industry-standard package outlines, a combination not previously possible with existing silicon platforms. These advancements in both silicon and 3D Multi-Chip-Module packaging complexity present multifaceted challenges to the failure analyst. The various height levels and assembly interfaces can be difficult to deprocess while maintaining all the critical evidence. Further complicating failure isolation within the system is the integration of multiple chips, which can lead to false positives. Most importantly, the discrete MOSFET all too often gets overlooked as just a simple three-terminal device, leading to incorrect deductions in determining the true root cause. This paper presents the discrete power MOSFET perspective amidst the competing forces of system-to-board-level failure analysis. It underlines the requirement for diligent analysis at every step and the importance, as an analyst, of contesting the conflicting assumptions of challenging customers. Automatic Test Equipment (ATE) data logs reported elevated power MOSFET leakage, and it was initially assumed that a MOSFET silicon process issue existed. Through methodical anamnesis and systematic analysis, the true failure was correctly isolated and the power MOSFET vindicated. The authors emphasize the importance of investigating all available evidence, from a macro to a micro 3D package perspective, to achieve the bona fide path forward and the true root cause.

