Cybersecurity economics – balancing operational security spending

2019 · Vol 32 (5) · pp. 1318-1342
Author(s): Ståle Ekelund, Zilia Iskoujina

Purpose – The purpose of this paper is to demonstrate how to find the optimal investment level in protecting an organisation's assets.
Design/methodology/approach – This study integrates a case study of an international financial organisation with various methods and theories in security economics and mathematics, such as value-at-risk (VaR), Monte Carlo simulation, and exponential and Poisson probability distributions. It thereby combines theory and empirical findings to establish a new approach to determining optimal security investment levels.
Findings – The results indicate that optimal security investment levels can be found through computer simulation with historical incident data to find VaR. By combining various scenarios, the convex graph of the risk cost function has been plotted, where the minimum of the graph represents the optimal investment level for an asset.
Research limitations/implications – The limitations of the research include a modest number of loss observations from one case study and the use of the normal probability distribution. The approach has limitations where no historical data are available or the data contain zero losses. These areas should undergo further research, including larger data sets of losses and the exploration of other probability distributions.
Practical implications – The results can be used by business practitioners to assist them with decision making on investment in the increased protection of an asset.
Originality/value – The originality of this research is in its new way of combining theories with historical data to create methods for measuring the theoretical and empirical strength of a control (or set of controls) and translating it into loss probabilities and loss sizes.
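
A minimal sketch of the kind of simulation described: Poisson-distributed incident counts, exponentially distributed loss sizes, and a VaR-based risk cost curve minimised over candidate investment levels. The control-effectiveness curve and all parameter values below are hypothetical illustrations, not the paper's calibration.

```python
# A minimal sketch, assuming Poisson incident frequency, exponential loss
# severity, and a hypothetical control-effectiveness curve. None of the
# parameter values come from the paper.
import numpy as np

rng = np.random.default_rng(42)

def simulate_annual_loss(freq_lambda, mean_severity, n_sims=20_000):
    """Monte Carlo: total loss per simulated year."""
    counts = rng.poisson(freq_lambda, size=n_sims)              # incidents per year
    return np.array([rng.exponential(mean_severity, k).sum() for k in counts])

def value_at_risk(losses, confidence=0.95):
    """VaR: the loss quantile at the given confidence level."""
    return np.quantile(losses, confidence)

# Assumed effect: each unit invested lowers incident frequency exponentially.
investments = np.linspace(0, 500_000, 21)
risk_costs = []
for spend in investments:
    lam = 12 * np.exp(-spend / 200_000)                         # hypothetical control strength
    losses = simulate_annual_loss(lam, mean_severity=80_000)
    risk_costs.append(spend + value_at_risk(losses))            # convex risk cost curve

optimum = investments[int(np.argmin(risk_costs))]
print(f"optimal investment level (sketch): {optimum:,.0f}")
```

The minimum of the simulated risk cost curve plays the role of the graph minimum the abstract describes.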

2014 · Vol 21 (1) · pp. 111-126
Author(s): Palaneeswaran Ekambaram, Peter E.D. Love, Mohan M. Kumaraswamy, Thomas S.T. Ng

Purpose – Rework is an endemic problem in construction projects and has been identified as a significant factor contributing to cost and schedule overruns. Causal ascription is necessary to obtain knowledge about the underlying nature of rework so that appropriate prevention mechanisms can be put in place. The paper aims to discuss these issues.
Design/methodology/approach – Using a supervised questionnaire survey and case-study interviews, data about the sources and causes of rework were obtained from 112 building and civil engineering projects. A multivariate exploration was conducted to examine the underlying relationships between rework variables.
Findings – The analysis revealed a significant difference between rework causes for building and civil engineering projects. The set of associations explored in the analyses will be useful for developing a generic causal model to examine the quantitative impact of rework on project performance so that appropriate prevention strategies can be identified and developed.
Research limitations/implications – The limitations include a small data set (112 projects), comprising 75 building and 37 civil engineering projects.
Practical implications – Meaningful insights into rework occurrences in construction projects will pave pathways for rational mitigation and effective management measures.
Originality/value – To date there has been limited empirical research seeking to determine the causal ascription of rework, particularly in Hong Kong.
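
One way to probe such a between-group difference is a chi-square test of rework-cause frequencies across the two project types. The sketch below is a hedged illustration only; the cause categories and counts are invented stand-ins for the survey data.

```python
# Hypothetical counts of rework causes by project type; not the paper's data.
from scipy.stats import chi2_contingency

# rows: building (75 projects) vs civil engineering (37 projects)
# columns: invented cause categories, e.g. design changes, errors, omissions
observed = [[30, 25, 20],
            [ 8, 15, 14]]
chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.3f}")  # small p suggests the cause profiles differ
```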


2019 · Vol 32 (5) · pp. 807-823
Author(s): Wu He, Xin Tian, Feng-Kwei Wang

Purpose – Few academic studies specifically investigate how businesses can use social media to innovate customer loyalty programs. The purpose of this paper is to present an in-depth case study of the Shop Your Way (SYW) program, which is regarded as one of the most successful customer loyalty programs with social media.
Design/methodology/approach – This paper uses case study research as the methodology to uncover innovative features associated with the SYW customer loyalty program. The authors collected the data from SYW's social media forums and tweets. The data set was analyzed using social media analytics tools, including R and Lexicon.
Findings – Based on the research results, the authors summarize the innovative social media features identified from SYW. The authors also provide insights and recommendations for businesses that are seeking to innovate their customer loyalty programs using social media technologies.
Originality/value – The results of this case study set a good example for businesses that want to innovate and improve their customer loyalty programs using social media technologies. This is the first in-depth case study on the SYW program, one of the most successful customer loyalty programs with social media. The results shed light on how social media can innovate customer loyalty programs in both theory and practice.
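
The core of a lexicon-based pass over such tweet data can be shown in a few lines. The sketch below is in Python rather than the R tooling the authors used, and its tiny word lists are hypothetical stand-ins for a full sentiment lexicon.

```python
# Toy lexicon-based sentiment scoring; word lists are invented placeholders.
POSITIVE = {"love", "great", "rewards", "easy", "free"}
NEGATIVE = {"broken", "slow", "confusing", "expired"}

def sentiment_score(text: str) -> int:
    """Net lexicon score: +1 per positive token, -1 per negative token."""
    tokens = text.lower().split()
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

tweets = ["Love the free rewards program", "Points expired and support is slow"]
for t in tweets:
    print(sentiment_score(t), t)   # positive tweet scores > 0, negative < 0
```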


Radiocarbon · 2012 · Vol 54 (02) · pp. 239-265
Author(s): Robert Z Selden

The East Texas Radiocarbon Database contributes to an analysis of tempo and place for Woodland era (∼500 BC–AD 800) archaeological sites within the region. The temporal and spatial distributions of calibrated ¹⁴C ages (n = 127) with a standard deviation (ΔT) of 61 from archaeological sites with Woodland components (n = 51) are useful in exploring the development and geographical continuity of the peoples in east Texas, and lead to a refinement of our current chronological understanding of the period. While analyses of summed probability distributions (SPDs) produce less than significant findings due to sample size, they are used here to illustrate the method of date combination prior to the production of site- and period-specific SPDs. Through the incorporation of this method, the number of ¹⁴C dates is reduced to 85 with a ΔT of 54. The resultant data set is then subjected to statistical analyses that conclude with the separation of the east Texas Woodland period into the Early Woodland (∼500 BC–AD 0), Middle Woodland (∼AD 0–400), and Late Woodland (∼AD 400–800) periods.
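
The date-combination step referred to here is conventionally the Ward and Wilson (1978) error-weighted pooled mean; the sketch below shows that step and a naive SPD built by summing Gaussian densities. The example ages are invented, and a real analysis would calibrate against a calibration curve before summing.

```python
# Sketch: pooling same-event 14C ages and building a naive SPD.
# Ages and errors below are hypothetical, in uncalibrated 14C yr BP.
import numpy as np

def combine_dates(ages, errors):
    """Error-weighted pooled mean and pooled error (Ward & Wilson style)."""
    ages, errors = np.asarray(ages, float), np.asarray(errors, float)
    w = 1.0 / errors**2
    pooled_age = (w * ages).sum() / w.sum()
    pooled_err = 1.0 / np.sqrt(w.sum())
    return pooled_age, pooled_err

def spd(ages, errors, grid):
    """Summed probability: one Gaussian density per (age, error), summed over a year grid."""
    total = np.zeros_like(grid, dtype=float)
    for a, s in zip(ages, errors):
        total += np.exp(-0.5 * ((grid - a) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return total

age, err = combine_dates([1520, 1495, 1510], [40, 35, 50])
print(f"pooled age: {age:.0f} +/- {err:.0f} 14C yr BP")
grid = np.arange(1200, 1900)
density = spd([1520, 1650, 1710], [40, 45, 60], grid)
```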


2014 · Vol 10 (4) · pp. 394-412
Author(s): Mai Miyabe, Akiyo Nadamoto, Eiji Aramaki

Purpose – The aim of this paper is to elucidate rumor propagation on microblogs and to assess a system for collecting rumor information to prevent rumor-spreading.
Design/methodology/approach – We present a case study of how rumors spread on Twitter during a recent disaster, the Great East Japan earthquake of March 11, 2011, compared against a normal situation. We specifically examine rumor disaffirmation because automatic rumor extraction is difficult: extracting rumor-disaffirmation is easier than extracting the rumors themselves. We classify tweets in disaster situations, analyze them based on users' impressions, and compare the spread of rumor tweets in a disaster situation to that in a normal situation.
Findings – The analysis results showed the following characteristics of rumors in a disaster situation. Information transmission accounts for 74.9 per cent of tweets, the greatest share in our data set. Rumor tweets give users strong behavioral facilitation, make them feel negative and foment disorder. Rumors in a normal situation spread through many hierarchies, but rumors in disaster situations spread through only two or three, meaning that the rumor-spreading style differs between disaster and normal situations.
Originality/value – The originality of this paper lies in targeting rumors on Twitter and analyzing rumor characteristics from multiple aspects, using not only rumor tweets but also disaffirmation tweets as the object of investigation.
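
The "hierarchies" comparison can be read as retweet-cascade depth. A small sketch of computing that depth from parent-child tweet edges follows; the toy cascades stand in for the real Twitter data.

```python
# Toy sketch: cascade depth from parent-child (retweet/reply) edges.
from collections import defaultdict

def cascade_depth(edges, root):
    """Length of the longest chain from the original tweet through its retweets."""
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)
    def depth(node):
        return 1 + max((depth(c) for c in children[node]), default=0)
    return depth(root)

disaster = [("t0", "t1"), ("t0", "t2"), ("t1", "t3")]                 # shallow, wide
normal = [("t0", "t1"), ("t1", "t2"), ("t2", "t3"), ("t3", "t4")]     # deep chain
print(cascade_depth(disaster, "t0"), cascade_depth(normal, "t0"))     # 3 vs 5 hierarchies
```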


Author(s): Nick Kelly, Maximiliano Montenegro, Carlos Gonzalez, Paula Clasing, Augusto Sandoval, ...

Purpose – The purpose of this paper is to demonstrate the utility of combining event-centred and variable-centred approaches when analysing big data for higher education institutions. It uses a large, university-wide data set to demonstrate the methodology for this analysis by using the case study method. It presents empirical findings about relationships between student behaviours in a learning management system (LMS) and the learning outcomes of students, and further explores these findings using process modelling techniques.
Design/methodology/approach – The paper describes a two-year study in a Chilean university, using big data from an LMS and from the central university database of student results and demographics. Descriptive statistics of LMS use in different years present an overall picture of student use of the system. Process mining is described as an event-centred approach that gives a deeper level of understanding of these findings.
Findings – The study found evidence to support the idea that instructors do not strongly influence student use of an LMS. It replicates existing studies in showing that higher-performing students use an LMS differently from lower-performing students. It shows the value of combining variable- and event-centred approaches to learning analytics.
Research limitations/implications – The study is limited by its institutional context, its two-year time frame and its exploratory mode of investigation to create a case study.
Practical implications – The paper is useful for institutions developing a methodology for using big data from an LMS that makes use of event-centred approaches.
Originality/value – The paper is valuable in replicating and extending recent studies that use event-centred approaches to the analysis of learning data. The study here is on a larger scale than existing studies (using a university-wide data set) and in a novel context (Latin America), and it provides a clear description of how and why the methodology should inform institutional approaches.
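
As an illustration of the event-centred step, the sketch below derives a directly-follows graph (a basic process-mining artefact) from a toy LMS event log. The event names and log layout are assumptions, not the university's actual export format.

```python
# Toy LMS event log -> per-student traces -> directly-follows frequencies.
from collections import Counter

log = [  # (student, timestamp, activity) -- invented stand-in for an LMS export
    ("s1", 1, "login"), ("s1", 2, "view_content"), ("s1", 3, "submit_quiz"),
    ("s2", 1, "login"), ("s2", 2, "view_forum"), ("s2", 3, "view_content"),
]

traces = {}
for student, ts, activity in sorted(log, key=lambda e: (e[0], e[1])):
    traces.setdefault(student, []).append(activity)

follows = Counter()
for trace in traces.values():
    follows.update(zip(trace, trace[1:]))   # count directly-follows pairs

for (a, b), n in follows.most_common():
    print(f"{a} -> {b}: {n}")
```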


2015 · Vol 22 (4) · pp. 624-642
Author(s): Subhadip Sarkar

Purpose – Identification of the best school among its competitors is done using a new technique called most productive scale size (MPSS)-based data envelopment analysis (DEA). The paper aims to discuss this issue.
Design/methodology/approach – A non-central principal component analysis is used here to create a new plane according to constant returns to scale. This plane contains only ultimate performers.
Findings – The new method is in complete discord with the results of CCR DEA. However, after incorporating the ultimate performers into the original data set, this difference was eliminated.
Practical implications – The proposed frontier provides a way to identify those decision-making units (DMUs) which follow the cost strategy proposed by Porter.
Originality/value – A case study of six schools is incorporated here to identify the superior school and also to visualize gaps in their performances.
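
For reference, the CCR DEA score that the findings contrast against can be computed with a small linear program (input-oriented envelopment form). The sketch below uses invented input/output figures for six schools; it illustrates standard CCR, not the author's proposed MPSS-based frontier.

```python
# Input-oriented CCR DEA via linear programming; data are invented.
import numpy as np
from scipy.optimize import linprog

X = np.array([[5, 8, 6, 9, 4, 7]], dtype=float)        # one input per school (hypothetical)
Y = np.array([[60, 72, 66, 70, 55, 80]], dtype=float)  # one output per school (hypothetical)

def ccr_efficiency(j0):
    """min theta s.t. sum_j lam_j x_j <= theta x_j0, sum_j lam_j y_j >= y_j0, lam >= 0."""
    m, n, s = X.shape[0], X.shape[1], Y.shape[0]
    c = np.zeros(1 + n); c[0] = 1.0                    # variables: [theta, lam_1..lam_n]
    A_ub, b_ub = [], []
    for i in range(m):                                 # input constraints
        A_ub.append(np.concatenate(([-X[i, j0]], X[i]))); b_ub.append(0.0)
    for r in range(s):                                 # output constraints
        A_ub.append(np.concatenate(([0.0], -Y[r]))); b_ub.append(-Y[r, j0])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun

for j in range(X.shape[1]):
    print(f"school {j + 1}: CCR efficiency = {ccr_efficiency(j):.3f}")
```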


2020 · Vol ahead-of-print (ahead-of-print)
Author(s): Carlo Amenta, Paolo Di Betta

Purpose – The article presents an empirical analysis that evaluates the effects of a systemic corruption scandal on demand in the short and the long run. In 2006, the Calciopoli scandal uncovered match rigging in the Italian soccer first division. The exemplary sporting sanction of relegating the primary culprit to the second division imposed further negative externalities on the other clubs. Should we prefer a sporting sanction on the team or monetary fines for the club?
Design/methodology/approach – We estimated two log-linear models of the demand side (stadium attendance) using a fixed-effects estimator, on two panel data sets comprising all the Italian soccer clubs in the first and second divisions (Serie A and Serie B) for the seasons 2004/2005 to 2009/2010, treating the relegation of Juventus as the event that impacted the demand for soccer.
Findings – Relegating Juventus to Serie B caused an immediate decrease of 18.4% in attendance for all the teams, both in Serie A and in Serie B, for the three seasons considered, and a 1% decrease when all the seasons are considered to measure the fallout of the scandal on the fans' disaffection.
Originality/value – The effect of corruption in sport on demand is an important issue, and few studies have been published on it. In terms of sports economics and management, our results are of interest to sport-governing bodies as a case study that can help in designing a more effective sanctioning system to prevent corruption episodes.
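
A hedged sketch of the estimator family described above: a log-linear within (fixed-effects) regression of attendance on a post-scandal dummy. The toy panel, variable names and single regressor are invented for illustration; they are not the paper's specification.

```python
# Within (fixed-effects) estimator on a toy club-season panel; data invented.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "club":         ["A", "A", "A", "B", "B", "B"],
    "attendance":   [30000, 24000, 25000, 18000, 15000, 16000],
    "post_scandal": [0, 1, 1, 0, 1, 1],    # hypothetical dummy for post-2006 seasons
})
df["log_att"] = np.log(df["attendance"])

# Demean outcome and regressor within each club (absorbs the club fixed effect).
demeaned = df.groupby("club")[["log_att", "post_scandal"]].transform(lambda s: s - s.mean())
beta = (demeaned["log_att"] @ demeaned["post_scandal"]) / (demeaned["post_scandal"] ** 2).sum()
print(f"estimated attendance effect (sketch): {100 * (np.exp(beta) - 1):.1f}%")
```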


2020 · Vol ahead-of-print (ahead-of-print)
Author(s): Tuomas Korhonen, Erno Selos, Teemu Laine, Petri Suomala

Purpose – The purpose of this paper is to better understand management accounting automation by exploring the programmability of management accounting work.
Design/methodology/approach – We build upon the literature on digitalization in management accounting and draw upon the pragmatic constructivist methodology to understand how digitalization takes place at the level of individual actors in accounting practice. The paper uses a data set from an interventionist case study of a machinery manufacturer.
Findings – We examine an actual process of automating management accounting tasks. During this development process, surprisingly, calculation tasks remained more fit for humans than machines, though initially they were thought to be programmable.
Research limitations/implications – According to our findings, practitioners may interpret experts' nonprogrammable work tasks as programmable and seek to automate them. Only identifying the factual possibilities for automating accounting-related work can lead to automation-improved efficiency. Our findings can be increasingly relevant for advanced analytics initiatives and applications within management accounting (e.g. robotic process automation, big data, machine learning and artificial intelligence).
Practical implications – Practitioners need to carefully analyze the entity they wish to automate and understand the factual possibilities of using and maintaining the planned automatic system throughout its life cycle.
Originality/value – The paper shows that when processes are assessed from a distance, nonprogrammable management accounting tasks and expertise can be misinterpreted as programmable, and the goal of automating them has little chance of success. It also shows possibilities for human accountants to remain relevant in comparison to machines and paves the way for further studies on advanced decision technologies in management accounting.


2020 · Vol ahead-of-print (ahead-of-print)
Author(s): Xue Deng, Weimin Li

Purpose – This paper aims to propose two portfolio selection models with hesitant value-at-risk (HVaR) – the HVaR fuzzy portfolio selection model (HVaR-FPSM) and the HVaR-score fuzzy portfolio selection model (HVaR-S-FPSM) – to help investors assess how bad a portfolio can be in a probabilistic hesitant fuzzy environment.
Design/methodology/approach – It is strictly proved that the higher the probability threshold, the higher the HVaR in HVaR-S-FPSM. Numerical examples and a case study are used to illustrate the steps of building the proposed models and the importance of the HVaR and score constraint. In the case study, the authors conduct a sensitivity analysis and compare the proposed models with decision-making models and hesitant fuzzy portfolio models.
Findings – The score constraint can ensure that the selected portfolio is profitable without causing the HVaR to decrease dramatically. The investment proportions of stocks are mainly affected by their HVaRs, which is consistent with the fact that a stock with good performance is usually desirable in portfolio selection. The HVaR-S-FPSM can find portfolios with higher HVaR than each single stock, with little sacrifice of extreme returns.
Originality/value – This paper fulfills a need to construct portfolio selection models with HVaR in a probabilistic hesitant fuzzy environment. As a downside risk, the HVaR is more consistent with investors' intuitions about risk. Moreover, the score constraint ensures that undesirable portfolios will not be selected.
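
As a loose, generic illustration of the quantile idea behind such a downside measure, the sketch below reads a value-at-risk off a set of hesitant (value, probability) evaluations. This is not the authors' exact HVaR construction, and the numbers are invented.

```python
# Generic quantile over hesitant (return, probability) pairs; illustrative only.
def hesitant_var(pairs, alpha=0.05):
    """Smallest evaluated return r with cumulative probability P(return <= r) >= alpha."""
    cum = 0.0
    for value, prob in sorted(pairs):          # ascending by return value
        cum += prob
        if cum >= alpha:
            return value
    return pairs[-1][0]

stock = [(-0.08, 0.10), (0.01, 0.50), (0.04, 0.40)]   # hypothetical hesitant evaluations
print(f"5% hesitant VaR (sketch): {hesitant_var(stock):.2%}")
```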


2001 · Vol 6 (3) · pp. 213-217
Author(s): B. S. Daya Sagar

A notable similarity is observed between the probability distributions obtained from a data set containing a large number of randomly situated surface water bodies and the probability distributions estimated by a binomial multiplicative process. From these well-conformed probability distributions, the generalised information dimensions have been computed through f-α spectra to characterise and quantify the spatial organisation of the surface water bodies. It is noticed from the investigated case study that the results tend to vary with the direction of the bisecting process. The experimental results on the spatial distribution of surface water bodies in the study area show that the generalised information dimensions computed for the vertical bisecting process are more uniform than those for the horizontal bisecting process.
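
Generalised information dimensions D_q are typically estimated from box-count measures. The sketch below does this for randomly placed points standing in for mapped water bodies; the point data, box sizes and regression over scales are illustrative assumptions.

```python
# Box-counting estimate of generalised dimensions D_q; data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
points = rng.random((2000, 2))   # hypothetical water-body centroids in a unit square

def d_q(points, q, epsilons=(1/4, 1/8, 1/16, 1/32)):
    """Slope of log(sum p_i^q)/(q-1) against log(eps) over box sizes eps."""
    xs, ys = [], []
    for eps in epsilons:
        bins = int(round(1 / eps))
        counts, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=bins)
        p = counts[counts > 0] / counts.sum()   # box occupation probabilities
        xs.append(np.log(eps))
        ys.append(np.log((p ** q).sum()) / (q - 1))
    slope, _ = np.polyfit(xs, ys, 1)
    return slope

for q in (0, 2):   # D0 = capacity dimension, D2 = correlation dimension (q=1 needs a limit form)
    print(f"D_{q} ~ {d_q(points, q):.2f}")
```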

