Analysis of the effects of spatiotemporal demand data aggregation methods on distance and volume errors

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Zachary Hornberger ◽  
Bruce Cox ◽  
Raymond R. Hill

Purpose
Large/stochastic spatiotemporal demand data sets can prove intractable for location optimization problems, motivating the need for aggregation. However, demand aggregation induces errors. Significant theoretical research has been performed on the modifiable areal unit problem and the zone definition problem, but minimal research has addressed the specific issues inherent to spatiotemporal demand data, such as search and rescue (SAR) data. This study provides a quantitative comparison of various aggregation methodologies and their relation to distance- and volume-based aggregation errors.

Design/methodology/approach
This paper introduces and applies a framework for comparing both deterministic and stochastic aggregation methods using distance- and volume-based aggregation error metrics. It additionally applies weighted versions of these metrics to account for the reality that demand events are nonhomogeneous. These metrics are applied to a large, highly variable, spatiotemporal demand data set of SAR events in the Pacific Ocean. Comparisons using these metrics are conducted between six quadrat aggregations of varying scales and two zonal distribution models using hierarchical clustering.

Findings
As quadrat fidelity increases, the distance-based aggregation error decreases, while the two deliberate zonal approaches further reduce this error while using fewer zones. However, the higher-fidelity aggregations detrimentally affect volume error. Additionally, by splitting the SAR data set into training and test sets, this paper shows the stochastic zonal distribution aggregation method is effective at simulating actual future demands.

Originality/value
This study indicates that no single best aggregation method exists; by quantifying the trade-offs in aggregation-induced errors, practitioners can select the method that minimizes the errors most relevant to their study. The study also quantifies the ability of a stochastic zonal distribution method to effectively simulate future demand data.
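
To make the quadrat idea concrete, the following Python sketch aggregates synthetic weighted demand points to grid-cell centroids and computes a weighted distance-based error. The function names and the specific error definition are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def quadrat_aggregate(points, cell_size):
    """Snap each demand point to the centroid of its square grid cell (quadrat)."""
    cells = np.floor(points / cell_size)        # integer cell indices
    return (cells + 0.5) * cell_size            # cell-center representatives

def weighted_distance_error(points, weights, reps):
    """Weighted mean distance between demand points and their aggregated representatives."""
    d = np.linalg.norm(points - reps, axis=1)
    return np.average(d, weights=weights)

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(1000, 2))       # synthetic demand locations
w = rng.exponential(1.0, size=1000)             # nonhomogeneous event weights
for size in (25.0, 10.0, 5.0):                  # coarser to finer quadrats
    err = weighted_distance_error(pts, w, quadrat_aggregate(pts, size))
    print(f"cell size {size:>5}: weighted distance error {err:.2f}")
```

Finer cells shrink the weighted distance error, mirroring the fidelity trade-off reported above (at the cost of more zones and, per the findings, worse volume error).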

2019 ◽  
Vol 37 (6/7) ◽  
pp. 1113-1124 ◽  
Author(s):  
Navneet Bhatt ◽  
Adarsh Anand ◽  
Deepti Aggrawal

Purpose
The purpose of this paper is to provide a mathematical framework to optimally allocate the resources required for the discovery of vulnerabilities pertaining to different severity risk levels.

Design/methodology/approach
Different sets of optimization problems have been formulated and, using a dynamic programming approach, a sequence of recursive functions has been constructed for the optimal allocation of resources used for discovering vulnerabilities of different severity scores. A Mozilla Thunderbird data set has been used for the empirical evaluation, working with vulnerabilities of different severities.

Findings
Given the impact associated with a vulnerability, critical- and high-severity vulnerabilities need to be patched promptly, and hence a larger share of funds has to be allocated to their discovery. Nevertheless, low- or medium-risk vulnerabilities might also be exploited, so their discovery is crucial as well. The current framework provides a diversified allocation of funds according to the requirements of a software manager and aims to significantly improve vulnerability discovery.

Practical implications
The findings of this research may enable software managers to adequately assign resources for managing vulnerability discovery. They may also help in estimating the funds required for bug bounty programs to reward security reporters, based on the potential number of vulnerabilities present in the software.

Originality/value
Much attention has been focused on vulnerability discovery modeling and the risk associated with security flaws, but, to the best of the authors' knowledge, no study incorporates the optimal allocation of resources with respect to vulnerabilities of different severity scores. Hence, this paper provides a building block for future research.
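
The recursive construction can be illustrated with a textbook resource-allocation dynamic program. The concave discovery curves below are stand-ins for the paper's vulnerability discovery models, and the budget units are hypothetical:

```python
import math

def optimal_allocation(budget, returns):
    """returns[k](x) = expected discoveries in class k from x budget units.
    Classic resource-allocation dynamic program, O(K * budget^2)."""
    K = len(returns)
    best = [[0.0] * (budget + 1) for _ in range(K + 1)]
    choice = [[0] * (budget + 1) for _ in range(K + 1)]
    for k in range(1, K + 1):
        for b in range(budget + 1):
            for x in range(b + 1):
                v = returns[k - 1](x) + best[k - 1][b - x]
                if v > best[k][b]:
                    best[k][b], choice[k][b] = v, x
    alloc, b = [], budget
    for k in range(K, 0, -1):               # backtrack the optimal split
        alloc.append(choice[k][b])
        b -= choice[k][b]
    return best[K][budget], alloc[::-1]

# Steeper curves for higher severities in this toy: critical, high, medium, low.
curves = [lambda x, a=a: a * math.log1p(x) for a in (4.0, 3.0, 2.0, 1.0)]
value, alloc = optimal_allocation(20, curves)
print("expected discoveries:", round(value, 2), "allocation:", alloc)
```

With diminishing returns per class, the optimum tilts funds toward the severe classes without starving the others, matching the diversified allocation described above.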


2014 ◽  
Vol 74 (1) ◽  
pp. 17-37 ◽  
Author(s):  
Yann de Mey ◽  
Frankwin van Winsen ◽  
Erwin Wauters ◽  
Mark Vancauteren ◽  
Ludwig Lauwers ◽  
...  

Purpose – The purpose of this paper is to present empirical evidence of risk balancing behavior by European farmers. More specifically, the authors investigate strategic adjustments in the level of financial risk (FR) in response to changes in the level of business risk (BR).

Design/methodology/approach – The authors conducted a correlation analysis and ran several linear fixed effects regression models using the European Union (EU)-15 FADN panel data set for the period 1995-2008.

Findings – Overall, the paper finds EU evidence of risk balancing. The correlation analysis suggests that just over half of the farm observations are risk balancers whereas the other (smaller) half are not. The coefficient in the fixed effects regression suggests that a 1 percent increase in BR reduces FR by 0.043 percent, with a standard error low enough to cast doubt on the existence of non-risk balancers. The results reject evidence of strong-form risk balancing – inverse trade-offs between FR and BR keeping total risk (TR) constant – but cannot reject weak-form risk balancing – inverse trade-offs between FR and BR with some observed changes in TR. Furthermore, the extent of risk balancing behavior is found to differ between European countries and across farm typologies.

Practical implications – This study provides European policy makers a first insight into the risk balancing behavior of EU farmers. When risk balancing occurs, BR-reducing agricultural policies induce strategic upward leverage adjustments that unintentionally reestablish or even increase total farm-level risk.

Originality/value – Making use of the large and unique FADN database, this study is, to the best of the authors' knowledge, the first to provide European (EU-15) evidence on risk balancing behavior; it is conducted at an unprecedentedly large scale and presents the first risk balancing evidence across countries and farming systems.
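
A minimal sketch of this kind of fixed-effects elasticity regression, on simulated (non-FADN) data with the reported -0.043 elasticity planted so the estimate can be checked end to end; the `linearmodels` package and all variable names are assumptions:

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(1)
n_farms, years = 500, range(1995, 2009)    # the EU-15 FADN window used above
idx = pd.MultiIndex.from_product([range(n_farms), years], names=["farm", "year"])

log_br = rng.normal(size=len(idx))                        # business risk
farm_fe = np.repeat(rng.normal(0, 0.5, n_farms), len(years))
log_fr = -0.043 * log_br + farm_fe + rng.normal(0, 0.2, len(idx))
df = pd.DataFrame({"log_fr": log_fr, "log_br": log_br}, index=idx)

# Farm (entity) fixed effects absorb time-invariant heterogeneity; the
# slope in a log-log specification reads directly as an elasticity.
res = PanelOLS(df["log_fr"], df[["log_br"]], entity_effects=True).fit(
    cov_type="clustered", cluster_entity=True)
print(res.params["log_br"])                # should recover roughly -0.043
```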


2018 ◽  
Vol 1 (2/3) ◽  
pp. 191-220 ◽  
Author(s):  
Matthias Strifler

Purpose
The purpose of this paper is to examine how profit sharing depends on the underlying profitability of firms. More precisely, motivated by theoretical research on fair wages and unionized labor markets, profit sharing is estimated for six different profitability categories: positive, increasing, positive and increasing, negative, decreasing, and negative or decreasing.

Design/methodology/approach
The paper exploits a high-quality linked employer–employee data set covering the universe of Finnish workers and firms. Endogeneity of profitability and self-selection of firms into different profitability categories are accounted for by an instrumental variables approach. The panel structure of the data is used to control for unobserved heterogeneity (spell and individual fixed effects).

Findings
Profits are shared if firms are profitable or become more profitable. The wage-profit elasticity varies between 0.03 and 0.13 in such firms. However, profits are not shared if firms make losses or become less profitable. There is no downward wage adjustment.

Research limitations/implications
Because of the instrumental variables approach, the question of external validity arises. Further empirical research on profit sharing with an explicit focus on firm profitability is warranted. The results of the paper indicate a connection between rent sharing and wage rigidity, as suggested by union and fair wage theory.

Originality/value
This is the first paper to consistently estimate the extent of profit sharing depending on the underlying profitability of firms.
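
The IV logic can be sketched on synthetic data: an instrument shifts profitability but is excluded from the wage equation, so two-stage least squares recovers a planted elasticity (0.08, inside the reported 0.03-0.13 band) that OLS would miss. The variable names and the instrument are invented for illustration, not taken from the paper:

```python
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

rng = np.random.default_rng(2)
n = 5000
z = rng.normal(size=n)                     # instrument (e.g. a demand shock)
u = rng.normal(size=n)                     # unobserved heterogeneity in both equations
log_profit = 0.8 * z + u + rng.normal(size=n)            # endogenous regressor
log_wage = 0.08 * log_profit - 0.5 * u + rng.normal(scale=0.1, size=n)

df = pd.DataFrame({"log_wage": log_wage, "log_profit": log_profit, "z": z})
df["const"] = 1.0
# IV2SLS(dependent, exog, endog, instruments): z instruments log_profit.
res = IV2SLS(df["log_wage"], df[["const"]], df["log_profit"], df["z"]).fit()
print(res.params["log_profit"])            # ~0.08; naive OLS would be biased
```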


2014 ◽  
Vol 30 (6) ◽  
pp. 14-16

Purpose – This paper aims to review the latest management developments across the globe and pinpoint practical implications from cutting-edge research and case studies.

Design/methodology/approach – This briefing is prepared by an independent writer who adds his own impartial comments and places the articles in context.

Findings – This briefing examines the trade-offs between product innovation performance and business performance. It leverages a data set of 99 medium-sized technology firms in Sweden and considers the variables that affect these trade-offs. The paper suggests that while product innovation performance is positively related to the sales of the firm, the links with profitability are rather less proven.

Practical implications – The paper provides strategic insights and practical thinking that have influenced some of the world's leading organizations.

Originality/value – The briefing saves busy executives and researchers hours of reading time by selecting only the very best, most pertinent information and presenting it in a condensed and easy-to-digest format.


2015 ◽  
Vol 30 (5) ◽  
pp. 626-636 ◽  
Author(s):  
Matthew Sarkees ◽  
Ryan Luchs

Purpose – The purpose of this paper is to address a gap in the literature by investigating how the combination of internal marketing or innovation investments with new product introductions influences alliance type choices. Most research on marketing–innovation resource allocation decisions has focused on trade-offs in internal investments, such as advertising versus research and development. Absent from this discussion is whether firms offset an internal weakness by reaching outside the boundaries of the firm through alliances. As a result, managers lack a clear understanding of the potential for complementarity in internal–external approaches to a market.

Design/methodology/approach – This paper draws on the resource-based view of the firm, using a longitudinal secondary data set and a choice model.

Findings – The authors find that firms that internally emphasize either marketing or innovation maintain the same approach externally with respect to alliance type choices. Thus, efforts to complement internal marketing (innovation) resource investments with innovation (marketing) alliances are not seen. However, the interaction of new product introductions with internal resource investments does result in a complementary firm approach.

Originality/value – The authors bridge a gap in the resource investment literature by exploring how internal decisions impact external alliance choices. The authors draw on longitudinal data and show that the act of making the choice is important, as it impacts future resource decisions. They explore the interaction between new product introductions and internal firm investments on alliance type choice. Given that new product introductions are key to longer-term firm success, examining these relationships enhances the managerial impact.
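
The paper names a choice model but not its exact form; one plausible reading is a multinomial logit of alliance type on internal investments and their interaction with new product introductions. The sketch below simulates such a model; all variable names, categories and coefficients are assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
mkt = rng.normal(size=n)                  # internal marketing emphasis
inno = rng.normal(size=n)                 # internal innovation emphasis
npi = rng.poisson(2, size=n)              # new product introductions

# Utilities: 0 = marketing alliance, 1 = innovation alliance, 2 = no alliance.
# Internal emphasis carries over to the matching alliance type; NPI interacts.
u = np.column_stack([1.0 * mkt + 0.3 * inno * npi,
                     1.0 * inno + 0.3 * mkt * npi,
                     np.zeros(n)])
p = np.exp(u) / np.exp(u).sum(axis=1, keepdims=True)
choice = np.array([rng.choice(3, p=pi) for pi in p])

X = sm.add_constant(pd.DataFrame({
    "mkt": mkt, "inno": inno, "npi": npi,
    "mkt_x_npi": mkt * npi, "inno_x_npi": inno * npi}))
res = sm.MNLogit(choice, X).fit(disp=0)
print(res.params)                          # log-odds relative to category 0
```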


2020 ◽  
Vol 47 (3) ◽  
pp. 547-560 ◽  
Author(s):  
Darush Yazdanfar ◽  
Peter Öhman

Purpose
The purpose of this study is to empirically investigate the determinants of financial distress among small and medium-sized enterprises (SMEs) during the global financial crisis and post-crisis periods.

Design/methodology/approach
Several statistical methods, including multiple binary logistic regression, were used to analyse a longitudinal cross-sectional panel data set of 3,865 Swedish SMEs operating in five industries over the 2008–2015 period.

Findings
The results suggest that financial distress is influenced by macroeconomic conditions (i.e. the global financial crisis) and, in particular, by various firm-specific characteristics (i.e. performance, financial leverage and financial distress in the previous year). However, firm size and industry affiliation have no significant relationship with financial distress.

Research limitations
Due to data availability, this study is limited to a sample of Swedish SMEs in five industries covering eight years. Further research could examine the generalizability of these findings by investigating other firms operating in other industries and other countries.

Originality/value
This study is the first to examine the determinants of financial distress among SMEs operating in Sweden using data from a large-scale longitudinal cross-sectional database.
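
A minimal sketch of the study's main estimator, multiple binary logistic regression, on simulated data whose covariates mirror the firm-specific characteristics named in the findings; the coefficients are invented for illustration:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 3865                                   # matches the sample size above
perf = rng.normal(size=n)                  # performance (e.g. profitability)
lev = rng.normal(size=n)                   # financial leverage
prior = rng.binomial(1, 0.15, size=n)      # financial distress in previous year
crisis = rng.binomial(1, 0.5, size=n)      # global financial crisis dummy

logit_p = -2.0 - 0.8 * perf + 0.6 * lev + 1.2 * prior + 0.4 * crisis
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))   # distress indicator

X = sm.add_constant(np.column_stack([perf, lev, prior, crisis]))
res = sm.Logit(y, X).fit(disp=0)
print(res.params)                          # recovers the planted signs
```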


2017 ◽  
Vol 55 (4) ◽  
pp. 376-389 ◽  
Author(s):  
Alice Huguet ◽  
Caitlin C. Farrell ◽  
Julie A. Marsh

Purpose
The use of data for instructional improvement is prevalent in today’s educational landscape, yet policies calling for data use may result in significant variation at the school level. The purpose of this paper is to focus on tools and routines as mechanisms of principal influence on data-use professional learning communities (PLCs).

Design/methodology/approach
Data were collected through a comparative case study of two low-income, low-performing schools in one district. The data set included interview and focus group transcripts, observation field notes and documents, and was iteratively coded.

Findings
The two principals in the study employed tools and routines differently to influence the ways that teachers interacted with data in their PLCs. Teachers who were given leeway to co-construct data-use tools found them to be more beneficial to their work. Findings also suggest that teachers’ data use may benefit from more flexibility in their day-to-day PLC routines.

Research limitations/implications
Closer examination of how tools are designed and time is spent in data-use PLCs may help the authors further understand the influence of the principal’s role.

Originality/value
Previous research has demonstrated that data use can improve teacher instruction, yet the varied implementation of data-use PLCs in this district illustrates that not all students have an equal opportunity to learn from teachers who meaningfully engage with data.


2017 ◽  
Vol 37 (1) ◽  
pp. 1-12 ◽  
Author(s):  
Haluk Ay ◽  
Anthony Luscher ◽  
Carolyn Sommerich

Purpose
The purpose of this study is to design and develop a testing device to simulate the interaction between human hand–arm dynamics, right-angle (RA) computer-controlled power torque tools and joint-tightening task-related variables.

Design/methodology/approach
The testing rig can simulate a variety of tools, tasks and operator conditions. The device includes custom data-acquisition electronics and graphical user interface-based software. The simulation of the human hand–arm dynamics is based on the rig’s four-bar-mechanism-based design and mechanical components that provide adjustable stiffness (via a pneumatic cylinder) and mass (via plates) and non-adjustable damping. The stiffness and mass values used are based on an experimentally validated hand–arm model that includes a database of model parameters with respect to gender and working posture, corresponding to experienced tool operators from a prior study.

Findings
The rig measures tool handle force and displacement responses simultaneously. Peak force and displacement coefficients of determination (R2) between rig estimations and human testing measurements were 0.98 and 0.85, respectively, for the same set of tools, tasks and operator conditions. The rig also provides predicted tool operator acceptability ratings, using a data set from a prior study of discomfort in experienced operators during torque tool use.

Research limitations/implications
Deviations from linearity may influence handle force and displacement measurements. Stiction (Coulomb friction) in the overall rig, as well as in the air cylinder piston, is neglected. The rig’s mechanical damping is not adjustable, despite the fact that human hand–arm damping varies with respect to gender and working posture. Deviations from these assumptions may affect the correlation of the handle force and displacement measurements with those of human testing for the same tool, task and operator conditions.

Practical implications
This test rig allows rapid assessment of the ergonomic performance of DC torque tools, saving considerable time in lineside applications and reducing the risk of worker injury. DC torque tools are an extremely effective way of increasing production rate and improving torque accuracy. Being complex dynamic systems, however, DC torque tools perform differently in each application. Changes in worker mass, damping and stiffness, as well as joint stiffness and tool program, make each application unique. This test rig models all of these factors and allows quick assessment.

Social implications
The use of this tool test rig will help to identify and understand risk factors that contribute to musculoskeletal disorders (MSDs) associated with the use of torque tools. Tool operators are subjected to large impulsive handle reaction forces as joint torque builds up while tightening a fastener. Repeated exposure to such forces is associated with muscle soreness, fatigue and physical stress, which are also risk factors for upper extremity injuries (MSDs; e.g. tendinosis, myofascial pain). Eccentric exertions are known to cause damage to muscle tissue in untrained individuals and affect subsequent performance.

Originality/value
The rig provides a novel means for quantitative, repeatable dynamic evaluation of RA powered torque tools and objective selection of tightening programs. Compared to current static tool assessment methods, dynamic testing provides a more realistic tool assessment relative to the tool operator’s experience. This may lead to improvements in tool or controller design and a reduction in associated musculoskeletal discomfort in operators.
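
The rig's adjustable mass and stiffness with fixed damping suggest a single-degree-of-freedom second-order model of the handle response. The sketch below simulates such a model under an impulsive torque build-up; all parameter values and the force profile are assumptions for illustration, not the validated model database referenced above:

```python
import numpy as np
from scipy.integrate import solve_ivp

m, c, k = 2.0, 30.0, 4000.0                # mass (kg), damping (N*s/m), stiffness (N/m)

def handle_force(t, t_build=0.15, f_peak=80.0):
    """Impulsive handle reaction force as joint torque builds, then releases."""
    return f_peak * (t / t_build) if t < t_build else 0.0

def dynamics(t, y):
    x, v = y                               # handle displacement and velocity
    return [v, (handle_force(t) - c * v - k * x) / m]

sol = solve_ivp(dynamics, (0.0, 0.5), [0.0, 0.0], max_step=1e-3)
print("peak handle displacement (mm):", round(1000 * np.abs(sol.y[0]).max(), 2))
```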


Author(s):  
A Salman Avestimehr ◽  
Seyed Mohammadreza Mousavi Kalan ◽  
Mahdi Soltanolkotabi

Abstract
Dealing with the sheer size and complexity of today’s massive data sets requires computational platforms that can analyze data in a parallelized and distributed fashion. A major bottleneck that arises in such modern distributed computing environments is that some of the worker nodes may run slow. These nodes, a.k.a. stragglers, can significantly slow down computation, as the slowest node may dictate the overall computational time. A recent computational framework, called encoded optimization, creates redundancy in the data to mitigate the effect of stragglers. In this paper, we develop a novel mathematical understanding of this framework, demonstrating its effectiveness in much broader settings than was previously understood. We also analyze the convergence behavior of iterative encoded optimization algorithms, allowing us to characterize fundamental trade-offs between convergence rate, size of data set, accuracy, computational load (or data redundancy) and straggler toleration in this framework.
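
A toy version of the encoding idea: distribute randomly sketched copies of the data so that any subset of workers yields an unbiased gradient estimate, letting the master ignore stragglers at every iteration. This construction is illustrative, not the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, workers = 600, 20, 6
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.01 * rng.normal(size=n)

# Encode: worker i holds (S_i A, S_i b) for a random Gaussian sketch S_i
# with E[S_i^T S_i] = I, so each worker's partial gradient is an unbiased
# estimate of the full least-squares gradient A^T (A x - b).
rows = n // workers
sketches = [rng.normal(size=(rows, n)) / np.sqrt(rows) for _ in range(workers)]
encoded = [(S @ A, S @ b) for S in sketches]

x = np.zeros(d)
for step in range(300):
    # Each iteration, two random workers straggle and are simply ignored.
    fast = rng.choice(workers, size=workers - 2, replace=False)
    g = sum(Ai.T @ (Ai @ x - bi) for Ai, bi in (encoded[i] for i in fast))
    x -= 1e-3 * g / len(fast)              # step on the averaged encoded gradient
print("recovery error:", np.linalg.norm(x - x_true))
```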


2019 ◽  
Vol 36 (4) ◽  
pp. 569-586
Author(s):  
Ricardo Puziol Oliveira ◽  
Jorge Alberto Achcar

Purpose
The purpose of this paper is to provide a new method to estimate the reliability of a series system by using a discrete bivariate distribution. This problem is of great interest in industrial and engineering applications.

Design/methodology/approach
The authors considered the Basu–Dhar bivariate geometric distribution and a Bayesian approach, with application to a simulated data set and an engineering data set.

Findings
From the obtained results of this study, the authors observe that the discrete Basu–Dhar bivariate probability distribution could be a good alternative in the analysis of series system structures, with accurate inference results for the reliability of the system under a Bayesian approach.

Originality/value
System reliability studies usually assume independent lifetimes for the components (series, parallel or complex system structures) when estimating the reliability of the system. In general, this assumption is not reasonable in many engineering applications, since the presence of some dependence structure between the lifetimes of the components could affect the evaluation of the reliability of the system.
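
If the Basu–Dhar joint survival function takes its standard form P(X >= x, Y >= y) = th1^x * th2^y * th3^max(x, y), then the series lifetime T = min(X, Y) is itself geometric with parameter th1*th2*th3. The simulation sketch below checks that closed form via the Marshall–Olkin-style shock construction; the parameter values are illustrative, and the paper's actual contribution (Bayesian inference for these parameters) is not reproduced here:

```python
import numpy as np

th1, th2, th3 = 0.95, 0.90, 0.99           # assumed survival parameters
rng = np.random.default_rng(6)
n = 100_000

def geometric_tail(theta, size):
    """U on {0, 1, 2, ...} with P(U >= u) = theta**u."""
    return rng.geometric(1 - theta, size) - 1

# Shock construction: X = min(U1, U3), Y = min(U2, U3) yields the joint
# survival P(X >= x, Y >= y) = th1**x * th2**y * th3**max(x, y).
U1, U2, U3 = (geometric_tail(th, n) for th in (th1, th2, th3))
X, Y = np.minimum(U1, U3), np.minimum(U2, U3)
T = np.minimum(X, Y)                        # series system: first component failure

t = 10
print("simulated   P(T >= 10):", (T >= t).mean())
print("closed form P(T >= 10):", (th1 * th2 * th3) ** t)
```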

