Journal of Defense Analytics and Logistics
Latest Publications


TOTAL DOCUMENTS: 53 (FIVE YEARS: 31)

H-INDEX: 2 (FIVE YEARS: 1)

Published by Emerald (MCB UP)

ISSN: 2399-6439

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Jade F. Preston ◽  
Bruce A. Cox ◽  
Paul P. Rebeiz ◽  
Timothy W. Breitbach

Purpose: Supply chains need to balance competing objectives; in addition to efficiency, supply chains need to be resilient to adversarial and environmental interference and robust to uncertainties in long-term demand. Significant research has been conducted designing efficient supply chains, and recent research has focused on resilient supply chain design. However, the integration of resilient and robust supply chain design is less well studied. The purpose of the paper is to incorporate resilience and robustness into supply chain design.

Design/methodology/approach: The paper develops a method to include resilience and robustness in supply chain design. Using the region of West Africa, which is plagued with persistent logistical issues, the authors develop a regional risk assessment framework and then apply categorical risk to the countries of West Africa using publicly available data. A scenario reduction technique is used to focus on the highest-risk scenarios so that the model remains tractable. Next, the authors develop a mathematical model leveraging this framework to design a resilient supply network that minimizes cost while ensuring the network functions following a disruption. Finally, the authors examine the network's robustness to demand uncertainty via several plausible emergency scenarios.

Findings: The authors provide optimal sets of transshipment hubs with varying counts from 5 through 15 hubs and determine there is no feasible solution that uses only five transshipment hubs. The findings reinforce that seven transshipment hubs – the solution currently employed in West Africa – is the cheapest architecture that achieves resilience and robustness. Additionally, for each feasible set of transshipment hubs, the authors provide connections between hubs and demand spokes.

Originality/value: While, at the time of this research, three other manuscripts had incorporated both resilience and robustness, the authors' research is unique in solving the problem as a network flow rather than as a set covering problem. Additionally, the authors establish a novel risk framework to guide the required amount of redundancy, and finally the research proposes a scenario reduction heuristic to allow tractable exploration of 512 possible demand scenarios.
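
The scenario reduction step can be pictured with a small sketch. The following is an illustrative heuristic, not the authors' code: it enumerates binary country-disruption scenarios, scores each with hypothetical categorical risk values, and keeps only the highest-risk scenarios for a downstream network model (the paper reduces 512 scenarios; the example uses 2^5 = 32 for brevity).

```python
# Illustrative scenario-reduction sketch (not the authors' code): rank binary
# country-disruption scenarios by a categorical risk score and keep only the
# highest-risk ones so the downstream network-flow model stays tractable.
from itertools import product

# Hypothetical categorical risk scores (0 = low, 1 = medium, 2 = high) for a
# handful of candidate transshipment countries; real scores would come from
# a regional risk assessment framework.
country_risk = {"A": 2, "B": 1, "C": 2, "D": 0, "E": 1}

def scenario_score(scenario):
    """Score a scenario, i.e. a set of simultaneously disrupted countries."""
    return sum(country_risk[c] for c in scenario)

# Enumerate all 2^5 disruption scenarios, then keep the k highest-risk ones.
countries = list(country_risk)
all_scenarios = [
    tuple(c for c, hit in zip(countries, mask) if hit)
    for mask in product([0, 1], repeat=len(countries))
]
k = 8
reduced = sorted(all_scenarios, key=scenario_score, reverse=True)[:k]
print(reduced)
```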


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
John T. Hanley

Purpose: The purpose of this paper is to illustrate how game theoretic solution concepts inform what classes of problems will be amenable to artificial intelligence and machine learning (AI/ML), and how to evolve the interaction between human and artificial intelligence.

Design/methodology/approach: The approach addresses the development of operational gaming to support planning and decision-making. It then provides a succinct summary of game theory for those designing and using games, with an emphasis on information conditions and solution concepts. It addresses how experimentation demonstrates where human decisions differ from game theoretic solution concepts and how games have been used to develop AI/ML. It concludes by suggesting which classes of problems will be amenable to AI/ML and which will not, and proposes a method for evolving human/artificial intelligence.

Findings: Game theoretic solution concepts inform classes of problems where AI/ML "solutions" will be suspect. The complexity of the subject requires a campaign of learning.

Originality/value: Though games have been essential to the development of AI/ML, practitioners have yet to employ game theory to understand its limitations.
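
As a minimal illustration of what a "solution concept" means here (not drawn from the paper), the sketch below computes the maximin and minimax security levels of a zero-sum matrix game with assumed payoffs; when the two coincide, the game has a pure-strategy saddle point, otherwise a mixed-strategy solution is required.

```python
# Minimal sketch of one game-theoretic solution concept (not from the paper):
# the maximin/minimax security levels of a zero-sum matrix game, which
# coincide at a saddle point (a pure-strategy equilibrium).
payoff = [  # row player's payoffs; columns are the opposing player's choices
    [3, -1, 2],
    [1,  0, 4],
    [-2, 5, 1],
]

row_maximin = max(min(row) for row in payoff)
col_minimax = min(max(payoff[i][j] for i in range(len(payoff)))
                  for j in range(len(payoff[0])))

print("maximin =", row_maximin, "minimax =", col_minimax)
if row_maximin == col_minimax:
    print("saddle point: the game has a pure-strategy solution")
else:
    print("no saddle point: the solution requires mixed strategies")
```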


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Thomas R. O'Neal ◽  
John M. Dickens ◽  
Lance E. Champagne ◽  
Aaron V. Glassburner ◽  
Jason R. Anderson ◽  
...  

Purpose: Forecasting techniques improve supply chain resilience by ensuring that the correct parts are available when required. In addition, accurate forecasts conserve precious resources and money by avoiding new-start contracts to produce unforeseen part requests, reducing labor-intensive cannibalization actions and ensuring consistent transportation modality streams where changes incur cost. This study explores the effectiveness of the United States Air Force's current flying hour-based demand forecast by comparing it with a sortie-based demand forecast to predict future spare part needs.

Design/methodology/approach: This study employs a correlation analysis to show that demand for reparable parts on certain aircraft has a stronger correlation to the number of sorties flown than to the number of flying hours. The effect of using the number of sorties flown instead of flying hours is analyzed by employing sorties in the United States Air Force (USAF)'s current reparable parts forecasting model. A comparative analysis of D200 forecasting error is conducted across the F-16 and B-52 fleets.

Findings: This study finds that the USAF could improve its reparable parts forecast, and subsequently part availability, by employing a sortie-based demand rate for particular aircraft such as the F-16. Additionally, the findings indicate that forecasts for reparable parts on aircraft with low sortie count flying profiles, such as the B-52 fleet, perform better when demand is modeled as a function of flying hours. Thus, evidence is provided that the Air Force should employ multiple forecasting techniques across its possessed, organically supported aircraft fleets. The improvement of the forecast and the subsequent decrease in forecast error are presented in the Results and Discussion section.

Research limitations/implications: This study is limited by the data-collection environment, which is only reported on an annual basis and is limited to 14 years of historical data. Furthermore, some observations were excluded because significant data entry errors rendered them unusable.

Originality/value: There are few studies addressing the time measure of USAF reparable component failures. To the best of the authors' knowledge, there are no studies that analyze spare component demand as a function of sortie numbers and compare the results of forecasts made on a sortie-based demand signal to the current flying hour-based approach to spare parts forecasting. The sortie-based forecast is a novel methodology and is shown to outperform the current flying hour-based method for some aircraft fleets.
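
The core comparison can be sketched in a few lines. The example below is illustrative only (hypothetical removal counts, not D200 data): it correlates part removals with flying hours and with sorties, then compares simple rate-based forecasts built on each driver using absolute percentage error.

```python
# Illustrative comparison (hypothetical data, not the authors' D200 records):
# correlate removals of a reparable part with flying hours and with sorties,
# then compare simple rate-based forecasts built on each driver.
import numpy as np

flying_hours = np.array([1200, 1350, 1100, 1500, 1250, 1400, 1300, 1450])
sorties      = np.array([ 310,  400,  290,  450,  330,  420,  370,  430])
removals     = np.array([  14,   19,   13,   21,   15,   20,   17,   20])

print("corr(removals, flying hours):", np.corrcoef(removals, flying_hours)[0, 1])
print("corr(removals, sorties):     ", np.corrcoef(removals, sorties)[0, 1])

# Rate-based forecast: demand rate from the first 6 periods applied to the
# last 2 periods' planned activity, evaluated by mean absolute percentage error.
train, test = slice(0, 6), slice(6, 8)
for name, driver in [("flying-hour", flying_hours), ("sortie", sorties)]:
    rate = removals[train].sum() / driver[train].sum()
    forecast = rate * driver[test]
    mape = np.mean(np.abs(forecast - removals[test]) / removals[test]) * 100
    print(f"{name}-based forecast MAPE: {mape:.1f}%")
```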


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Phuc Bao Uyen Nguyen

Purpose: The purpose is to develop search and detection strategies that maximize the probability of detection of mine-like objects.

Design/methodology/approach: The author has developed a methodology that incorporates variational calculus, number theory and algebra to derive a globally optimal strategy that maximizes the expected probability of detection.

Findings: The author found a set of look angles that globally maximizes the probability of detection for a general class of mirror-symmetric targets.

Research limitations/implications: The optimal strategies only maximize the probability of detection, not the probability of identification.

Practical implications: In the context of a search and detection operation, there is only a limited time to find the target before life is lost; hence, improving the chance of detection translates, in real terms, into the difference between success and failure, life and death. This rich field of study can be applied to mine countermeasure operations to make sure that areas of operations are free of mines so that naval operations can be conducted safely.

Originality/value: There are two novel elements in this paper. First, the author determines the set of globally optimal look angles that maximize the probability of detection. Second, the author introduces the phenomenon of concordance between sensor images.
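
To make the idea of optimizing look angles concrete, the sketch below uses an entirely hypothetical single-look detection model (not the paper's derivation): it combines independent looks via the complement of the joint miss probability and grid-searches a pair of look angles that maximizes the expected detection probability over an unknown target heading.

```python
# Illustrative sketch only: a made-up per-look detection model for a
# mirror-symmetric target, combined over independent looks, with a coarse
# grid search for a high-performing pair of look angles.
import math

def p_detect(look_angle, target_heading):
    """Hypothetical single-look detection model: broadside views detect best."""
    aspect = look_angle - target_heading
    return 0.3 + 0.6 * abs(math.sin(aspect))

def combined_pd(angles, target_heading):
    """Probability that at least one of several independent looks detects."""
    miss = 1.0
    for a in angles:
        miss *= 1.0 - p_detect(a, target_heading)
    return 1.0 - miss

# Average over an unknown, uniformly distributed target heading and pick the
# best pair of look angles from a coarse grid.
headings = [math.radians(h) for h in range(0, 180, 5)]
grid = [math.radians(a) for a in range(0, 180, 15)]
best = max(
    ((a1, a2) for a1 in grid for a2 in grid),
    key=lambda pair: sum(combined_pd(pair, h) for h in headings) / len(headings),
)
print("best look-angle pair (deg):", [round(math.degrees(a)) for a in best])
```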


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Kyle C. McDermott ◽  
Ryan D. Winz ◽  
Thom J. Hodgson ◽  
Michael G. Kay ◽  
Russell E. King ◽  
...  

Purpose: The study aims to investigate the impact of additive manufacturing (AM) on the performance of a spare parts supply chain, with a particular focus on underlying spare part demand patterns.

Design/methodology/approach: This work evaluates various AM-enabled supply chain configurations through Monte Carlo simulation. Historical demand simulation and intermittent demand forecasting are used in conjunction with a mixed integer linear program to determine optimal network nodal inventory policies. By varying demand characteristics and AM capacity, this work assesses how to best employ AM capability within the network.

Findings: This research assesses the preferred AM-enabled supply chain configuration for varying levels of intermittent demand patterns and AM production capacity. The research shows that variation in demand patterns alone directly affects the preferred network configuration. The relationship between demand volume and relative AM production capacity affects the regions of superior network configuration performance.

Research limitations/implications: This research makes several simplifying assumptions regarding AM technical capabilities. AM production time is assumed to be deterministic and does not consider build failure probability, build chamber capacity, part size, part complexity and post-processing requirements.

Originality/value: This research is the first study to link realistic spare part demand characterization to AM supply chain design using quantitative modeling.
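
Intermittent demand forecasting is commonly done with Croston-style methods; the sketch below implements basic Croston's method as one plausible choice (the paper does not specify this exact variant), smoothing nonzero demand sizes and inter-demand intervals separately and forecasting their ratio.

```python
# Sketch of one standard intermittent-demand forecasting technique, Croston's
# method; this is an assumed illustration, not necessarily the variant used
# in the paper.
def croston(demand, alpha=0.1):
    """Return per-period demand-rate forecasts for an intermittent series."""
    z_hat = None   # smoothed size of nonzero demands
    p_hat = None   # smoothed interval between nonzero demands
    periods_since_demand = 1
    forecasts = []
    for d in demand:
        forecasts.append(0.0 if z_hat is None else z_hat / p_hat)
        if d > 0:
            if z_hat is None:            # initialize on the first demand
                z_hat, p_hat = d, periods_since_demand
            else:
                z_hat += alpha * (d - z_hat)
                p_hat += alpha * (periods_since_demand - p_hat)
            periods_since_demand = 1
        else:
            periods_since_demand += 1
    return forecasts

spare_part_demand = [0, 0, 3, 0, 0, 0, 2, 0, 4, 0, 0, 1]
print([round(f, 2) for f in croston(spare_part_demand)])
```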


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Joe Garcia ◽  
Russell Shannon ◽  
Aaron Jacobson ◽  
William Mosca ◽  
Michael Burger ◽  
...  

Purpose: This paper aims to describe an effort to provide a robust and secure software development paradigm intended to support DevSecOps in a naval aviation enterprise (NAE) software support activity (SSA), with said paradigm supporting strong traceability and provability concerning the SSA's output product, known as an operational flight program (OFP). Through a secure development environment (SDE), each critical software development function performed on the OFP during its development has a corresponding record represented on a blockchain.

Design/methodology/approach: An SDE is implemented as a virtual machine or container incorporating software development tools that are modified to support blockchain transactions. Each critical software development function, e.g. editing, compiling, linking, generates a blockchain transaction message with associated information embedded in the output of said function that, together, can be used to prove integrity and support traceability. An attestation process is used to provide proof that the SDE toolchain is not subject to unauthorized modification at the time said critical function is performed.

Findings: Blockchain methods are shown to be a viable approach for supporting exhaustive traceability and strong provability of development system integrity for mission-critical software produced by an NAE SSA for NAE embedded systems software.

Practical implications: A blockchain-based authentication approach that could be implemented at the OFP point-of-load would provide fine-grain authentication of all OFP software components, with each component or module having its own proof of integrity (including the integrity of the development tools used) over its entire development history.

Originality/value: Many SSAs have established control procedures for development, such as check-out/check-in, but these do not prove the SSA's output software is secure. For one thing, a build system does not necessarily enforce procedures in a way that is determinable from the output. Furthermore, the SSA toolchain itself could be attacked. The approach described in this paper enforces security policy and embeds information into the output of every development function that can be cross-referenced to blockchain transaction records, providing provability and traceability that only trusted tools, free from unauthorized modifications, are used in software development. A key original concept of this approach is that it treats assigned developer time as a transferable digital currency.
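
A toy illustration of the record-keeping idea (not the NAE/SSA implementation) is shown below: each development step (edit, compile, link) is captured as a hash-chained record binding the tool, its input digest, its output digest and the previous record's hash, so later tampering with any step breaks the chain.

```python
# Minimal illustration of recording development steps as hash-chained
# transactions; hypothetical step names and artifacts, not the actual system.
import hashlib, json, time

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def record_step(chain, tool, input_digest, output_digest):
    """Append a record that binds a tool invocation to the previous record."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {
        "tool": tool,
        "input": input_digest,
        "output": output_digest,
        "prev": prev_hash,
        "time": time.time(),
    }
    body_hash = digest(json.dumps(body, sort_keys=True).encode())
    chain.append({"body": body, "hash": body_hash})

chain = []
record_step(chain, "edit",    digest(b"main.c v1"), digest(b"main.c v2"))
record_step(chain, "compile", digest(b"main.c v2"), digest(b"main.o"))
record_step(chain, "link",    digest(b"main.o"),    digest(b"ofp.bin"))
print(json.dumps(chain, indent=2))
```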


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Yousaf Ali ◽  
Khaqan Zeb ◽  
Abdul Haseeb Khan Babar ◽  
Muhammad Asees Awan

Purpose: The purpose of this research is to identify major barriers to the implementation of reverse logistics (RL). The study also addresses best practices among reuse, remanufacture, recycling, refurbishment and repair as alternatives for RL processes.

Design/methodology/approach: This study targets supply chain management experts for their opinions regarding the identification of critical barriers and alternatives for RL implementation. Their opinions were collected through a web questionnaire based on 14 criteria and 5 alternatives. Multi-criteria decision-making tools, namely fuzzy VIKOR and fuzzy TOPSIS, are used for analysis.

Findings: The results indicate that lack of recognition of the competitive advantage to be gained through RL practice is the most critical barrier to RL implementation. The least critical barrier, and hence a major facilitator for RL, is "supportive initiative for end-of-life products." The top-ranked alternative in this study is reuse, followed by remanufacturing. The least important alternative is "repair" in the case of Pakistan. These alternatives are ranked based on Q values derived through fuzzy VIKOR.

Research limitations/implications: The results of this study can only be generalized to the manufacturing sector of Pakistan during the period of the study.

Practical implications: The findings of this study will assist managers in deploying the best practices concerning RL.

Originality/value: Fuzzy VIKOR and fuzzy TOPSIS have not been applied to RL alternatives in previous research.
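
For readers unfamiliar with TOPSIS-style ranking, the sketch below runs a crisp (non-fuzzy) TOPSIS pass over hypothetical alternative-by-criterion scores and assumed weights; the paper itself applies the fuzzy variants, but the closeness-to-ideal ranking logic is the same.

```python
# Illustrative crisp TOPSIS sketch with hypothetical scores and weights
# (the study uses fuzzy TOPSIS and fuzzy VIKOR; this shows the core idea).
import numpy as np

# Rows: RL alternatives; columns: hypothetical criterion scores.
alternatives = ["reuse", "remanufacture", "recycle", "refurbish", "repair"]
X = np.array([
    [7, 8, 6],
    [6, 7, 7],
    [5, 6, 5],
    [6, 5, 6],
    [4, 5, 4],
], dtype=float)
weights = np.array([0.5, 0.3, 0.2])      # assumed criterion weights
benefit = np.array([True, True, True])   # all criteria treated as benefits

R = X / np.linalg.norm(X, axis=0)        # vector-normalize each criterion
V = R * weights                          # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_best  = np.linalg.norm(V - ideal, axis=1)
d_worst = np.linalg.norm(V - anti,  axis=1)
closeness = d_worst / (d_best + d_worst)

for name, c in sorted(zip(alternatives, closeness), key=lambda t: -t[1]):
    print(f"{name:13s} closeness = {c:.3f}")
```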


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Albert Vasso ◽  
Richard Cobb ◽  
John Colombi ◽  
Bryan Little ◽  
David Meyer

Purpose: The US Government is challenged to maintain pace as the world's de facto provider of space object cataloging data. Augmenting capabilities with nontraditional sensors presents an expeditious and low-cost improvement. However, the large tradespace and unexplored system-of-systems performance requirements pose a challenge to successful capitalization. This paper aims to better define and assess the utility of augmentation via a multi-disciplinary study.

Design/methodology/approach: Hypothetical telescope architectures are modeled and simulated on two separate days, then evaluated against performance measures and constraints using multi-objective optimization in a heuristic algorithm. Decision analysis and Pareto optimality identify a set of high-performing architectures while preserving decision-maker design flexibility.

Findings: Capacity, coverage and maximum time unobserved are recommended as key performance measures. A total of 187 out of 1017 architectures were identified as top performers. A total of 29% of the sensors considered are found in over 80% of the top architectures. Additional considerations further reduce the tradespace to 19 best choices, which collect an average of 49–51 observations per space object with a 595–630 min average maximum time unobserved, providing redundant coverage of the Geosynchronous Orbit belt. This represents a three-fold increase in capacity and coverage and a 2 h (16%) decrease in the maximum time unobserved compared to the baseline government-only architecture as modeled.

Originality/value: This study validates the utility of an augmented network concept using a physics-based model and modern analytical techniques. It objectively responds to policy mandating cataloging improvements without relying solely on expert-derived point solutions.
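
The Pareto-optimality filter in the evaluation step can be illustrated with a small sketch (hypothetical scores, not the study's data): it keeps the architectures that are not dominated when maximizing capacity and coverage and minimizing maximum time unobserved.

```python
# Illustrative Pareto-front sketch with hypothetical architecture scores:
# keep the non-dominated set under (maximize capacity, maximize coverage,
# minimize maximum time unobserved).
architectures = {
    # name: (capacity, coverage, max_time_unobserved_minutes)
    "arch_A": (150, 0.92, 610),
    "arch_B": (140, 0.95, 600),
    "arch_C": (120, 0.90, 650),
    "arch_D": (155, 0.91, 640),
    "arch_E": (130, 0.94, 620),
}

def dominates(a, b):
    """True if a is at least as good as b on every objective and better on one."""
    at_least = a[0] >= b[0] and a[1] >= b[1] and a[2] <= b[2]
    strictly = a[0] > b[0] or a[1] > b[1] or a[2] < b[2]
    return at_least and strictly

pareto = [
    name for name, score in architectures.items()
    if not any(dominates(other, score)
               for other_name, other in architectures.items()
               if other_name != name)
]
print("non-dominated architectures:", pareto)
```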


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Zachary Hornberger ◽  
Bruce Cox ◽  
Raymond R. Hill

Purpose: Large/stochastic spatiotemporal demand data sets can prove intractable for location optimization problems, motivating the need for aggregation. However, demand aggregation induces errors. Significant theoretical research has been performed on the modifiable areal unit problem and the zone definition problem, but minimal research has addressed the specific issues inherent to spatiotemporal demand data, such as search and rescue (SAR) data. This study provides a quantitative comparison of various aggregation methodologies and their relation to distance- and volume-based aggregation errors.

Design/methodology/approach: This paper introduces and applies a framework for comparing both deterministic and stochastic aggregation methods using distance- and volume-based aggregation error metrics. It additionally applies weighted versions of these metrics to account for the reality that demand events are nonhomogeneous. These metrics are applied to a large, highly variable, spatiotemporal demand data set of SAR events in the Pacific Ocean. Comparisons using these metrics are conducted between six quadrat aggregations of varying scales and two zonal distribution models using hierarchical clustering.

Findings: As quadrat fidelity increases, the distance-based aggregation error decreases, while the two deliberate zonal approaches further reduce this error while using fewer zones. However, the higher-fidelity aggregations detrimentally affect volume error. Additionally, by splitting the SAR data set into training and test sets, this paper shows the stochastic zonal distribution aggregation method is effective at simulating actual future demands.

Originality/value: This study indicates that no single best aggregation method exists; by quantifying trade-offs in aggregation-induced errors, practitioners can utilize the method that minimizes the errors most relevant to their study. The study also quantifies the ability of a stochastic zonal distribution method to effectively simulate future demand data.
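
A minimal sketch of a weighted distance-based error metric is given below (synthetic event locations, not the SAR data set): each demand event is replaced by the centroid of its quadrat cell, and the error is the demand-weighted mean distance between the true and aggregated locations, computed for progressively finer grids.

```python
# Illustrative weighted distance-based aggregation error on synthetic data:
# finer quadrat grids move cell centroids closer to the true event locations.
import numpy as np

rng = np.random.default_rng(0)
events = rng.uniform(0, 100, size=(200, 2))    # hypothetical event locations
weights = rng.integers(1, 5, size=200)         # e.g. demand volume per event

def grid_assign(points, cell):
    """Quadrat aggregation: map each point to the centroid of its grid cell."""
    cells = np.floor(points / cell)
    return (cells + 0.5) * cell

def weighted_distance_error(points, centroids, w):
    d = np.linalg.norm(points - centroids, axis=1)
    return np.average(d, weights=w)

for cell in (50, 25, 10):                      # coarser to finer quadrats
    err = weighted_distance_error(events, grid_assign(events, cell), weights)
    print(f"cell size {cell:>2}: weighted distance error = {err:.2f}")
```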


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Geoff A.M. Loveman ◽  
Joel J.E. Edney

Purpose: The purpose of the present study was the development of a methodology for translating predicted rates of decompression sickness (DCS) following tower escape from a sunken submarine into predicted probability of survival, a more useful statistic for making operational decisions.

Design/methodology/approach: Predictions were made, using existing models, for the probabilities of a range of DCS symptoms following submarine tower escape. Subject matter expert estimates of the effect of these symptoms on a submariner's ability to survive in benign weather conditions on the sea surface until rescued were combined with the likelihoods of the different symptoms occurring using standard probability theory. Plots were generated showing the dependence of predicted probability of survival following escape on the escape depth and the pressure within the stricken submarine.

Findings: Current advice on whether to attempt tower escape is based on avoiding rates of DCS above approximately 5%–10%. Consideration of predicted survival rates, based on subject matter expert opinion, suggests that the current advice might be considered conservative in the distressed submarine scenario, as DCS rates of 10% are not anticipated to markedly affect survival rates.

Originality/value: To the authors' knowledge, this study represents the first attempt to quantify the effect of different DCS symptoms on the probability of survival in submarine tower escape.
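
The combination step amounts to the law of total probability; the sketch below uses entirely hypothetical symptom probabilities and expert survival estimates (not the study's values) to show how a predicted DCS profile translates into a single predicted probability of survival.

```python
# Illustrative sketch with hypothetical numbers: combine predicted DCS symptom
# probabilities with expert-estimated conditional survival probabilities via
# the law of total probability.
symptom_probability = {        # mutually exclusive outcomes after escape
    "no DCS":            0.90,
    "mild joint pain":   0.06,
    "neurological DCS":  0.03,
    "severe/pulmonary":  0.01,
}
survival_given_symptom = {     # hypothetical subject-matter-expert estimates
    "no DCS":            0.99,
    "mild joint pain":   0.95,
    "neurological DCS":  0.70,
    "severe/pulmonary":  0.30,
}

p_survive = sum(symptom_probability[s] * survival_given_symptom[s]
                for s in symptom_probability)
print(f"predicted probability of survival: {p_survive:.3f}")
```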

