Global Supply Chain Management at Printko Ink Company

2011 ◽  
Vol 6 (4) ◽  
Author(s):  
Raida Abuizam

The Printko Ink Company case illustrates how network models can be used as an aid in spreadsheet model formulation. It also enriches students’ knowledge of how to use integer linear programming with binary (0-1) variables to deal with fixed-cost plant and warehouse location problems. Students completing the Printko Ink case will be able to develop a spreadsheet model that solves for many logistics decision variables. It will help students decide where, or whether, to manufacture Printko Ink’s single product and how to get it to its customers around the world in the most economical manner.
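The fixed-cost location structure the case builds in a spreadsheet can be sketched in a few lines of code. The sketch below is a minimal illustration with made-up plant costs, shipping rates and demands (not the case data): each 0-1 decision says whether a plant opens, and total cost is the fixed opening costs plus cheapest-open-plant shipping — the same structure a binary integer program hands to a solver, here solved by brute-force enumeration.

```python
from itertools import product

# Hypothetical data, not from the Printko Ink case.
fixed_cost = {"P1": 400, "P2": 250, "P3": 300}   # cost of opening each plant
ship_cost = {                                     # plant -> customer unit cost
    "P1": {"C1": 2, "C2": 6, "C3": 5},
    "P2": {"C1": 7, "C2": 3, "C3": 6},
    "P3": {"C1": 4, "C2": 4, "C3": 2},
}
demand = {"C1": 50, "C2": 40, "C3": 60}

plants = list(fixed_cost)
best = (float("inf"), None)
# Enumerate every 0-1 open/close decision (what a MILP solver's
# branch-and-bound would explore implicitly).
for opened in product([0, 1], repeat=len(plants)):
    chosen = [p for p, o in zip(plants, opened) if o]
    if not chosen:
        continue
    cost = sum(fixed_cost[p] for p in chosen)
    # Each customer is served entirely by its cheapest open plant.
    for c, d in demand.items():
        cost += d * min(ship_cost[p][c] for p in chosen)
    if cost < best[0]:
        best = (cost, chosen)

print(best)  # minimal total cost and the plants to open -> (780, ['P3'])
```

In a spreadsheet, the same enumeration is delegated to Solver with the open/close cells constrained to be binary; the brute-force loop simply makes the underlying 0-1 search explicit.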

2021 ◽  
Vol 1 ◽  
pp. 1755-1764
Author(s):  
Rongyan Zhou ◽  
Julie Stal-Le Cardinal

Abstract
Industry 4.0 is a great opportunity and a tremendous challenge for every sector of society. Our study combines complex network and qualitative methods to analyze Industry 4.0 macroeconomic issues and the global supply chain, enriching qualitative analysis and machine learning in macroscopic and strategic research. Unsupervised complex graph network models are used to explore how Industry 4.0 reshapes the world. Based on the in-degree and out-degree of the weighted and unweighted edges of each node, combined with grouping results from unsupervised learning, our study shows that the cooperation groups of Industry 4.0 differ from previous traditional alliances. Macroeconomic issues are also studied. Finally, strongly cohesive groups are identified and recommendations for business people and policymakers are proposed.
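The node-level degree measures the study relies on can be computed with a short sketch. The edge list below is hypothetical (country A supplies country B with weight w), not the study's data:

```python
# Weighted and unweighted in-/out-degree of each node in a directed graph.
edges = [("DE", "US", 3.0), ("CN", "US", 5.0), ("CN", "DE", 2.0), ("US", "CN", 1.5)]

nodes = {n for a, b, _ in edges for n in (a, b)}
stats = {n: {"out_deg": 0, "in_deg": 0, "out_w": 0.0, "in_w": 0.0} for n in nodes}
for src, dst, w in edges:
    stats[src]["out_deg"] += 1   # unweighted out-degree (number of outgoing edges)
    stats[src]["out_w"] += w     # weighted out-degree (total outgoing strength)
    stats[dst]["in_deg"] += 1
    stats[dst]["in_w"] += w

print(stats["US"])  # {'out_deg': 1, 'in_deg': 2, 'out_w': 1.5, 'in_w': 8.0}
```

These four numbers per node are the raw features that a grouping step (e.g. unsupervised clustering, as in the study) can then operate on.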


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Rohit Kundu ◽  
Hritam Basak ◽  
Pawan Kumar Singh ◽  
Ali Ahmadian ◽  
Massimiliano Ferrara ◽  
...  

Abstract
COVID-19 has crippled the world’s healthcare systems, set back the economy and claimed many lives. Although potential vaccines are being tested and supplied around the world, it will take a long time for them to reach every human being, all the more so as new variants of the virus emerge and enforce lockdown-like situations in parts of the world. Thus, there is a dire need for early and accurate detection of COVID-19 to prevent further spread of the disease. The current gold-standard RT-PCR test is only 71% sensitive and is laborious to perform, making population-wide screening impractical. To this end, in this paper we propose an automated COVID-19 detection system that uses CT-scan images of the lungs to classify them into COVID and non-COVID cases. The proposed method applies an ensemble strategy that generates fuzzy ranks of the base classification models using the Gompertz function and fuses the decision scores of the base models adaptively to make the final predictions on the test cases. Three transfer-learning-based convolutional neural network models are used, namely VGG-11, Wide ResNet-50-2, and Inception v3, to generate the decision scores to be fused by the proposed ensemble model. The framework has been evaluated on two publicly available chest CT-scan datasets, achieving state-of-the-art performance and justifying the reliability of the model. The source code for the present work is available on GitHub.
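The general shape of Gompertz-based rank fusion can be sketched as follows. The exact parameterisation and fusion rule in the paper may differ; here each model's class confidence is mapped through an assumed re-parameterised Gompertz curve, rank(s) = 1 − exp(−exp(−2s)), so that high confidence yields a low (better) fuzzy rank, ranks are summed across models, and the class with the smallest fused rank wins:

```python
import math

def gompertz_rank(score):
    # Assumed Gompertz-style transform: monotonically decreasing in score,
    # so confident predictions receive small (good) fuzzy ranks.
    return 1.0 - math.exp(-math.exp(-2.0 * score))

# Softmax decision scores from three hypothetical base CNNs
# for one test image over the classes [COVID, non-COVID].
model_scores = [
    [0.80, 0.20],   # e.g. VGG-11
    [0.65, 0.35],   # e.g. Wide ResNet-50-2
    [0.40, 0.60],   # e.g. Inception v3
]

fused = [0.0, 0.0]
for scores in model_scores:
    for cls, s in enumerate(scores):
        fused[cls] += gompertz_rank(s)

prediction = min(range(2), key=lambda c: fused[c])
print(prediction)  # 0 -> the COVID class has the smaller fused fuzzy rank
```

Note how the nonlinear transform makes the fusion adaptive: a single very confident model can outweigh two mildly dissenting ones, which plain score averaging would not capture in the same way.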


2021 ◽  
Vol 4 ◽  
pp. 1-7
Author(s):  
Thierry Garlan ◽  
Isabelle Gabelotaud ◽  
Elodie Marchès ◽  
Edith Le Borgne ◽  
Sylvain Lucas

Abstract. A global seabed sediment map has been under development since 1995 to provide a necessary tool for different needs. The project is not entirely original: it had already been attempted in 1912, when the French Hydrographic Office and the University of Nancy produced sedimentary maps of the European and North American coasts. Seabed sediment is one of the last geographical domains that cannot benefit from satellite data. Without that contribution, sediment maps must mix very old data with new data to reach the goal of a global map. In general, sediment maps are made with the latest available techniques and are replaced after a few decades, generating new cartographic works as if all the previous efforts had become useless. Such an approach underestimates the quality of past works and prevents the production of maps covering large areas. The present work proposes standardizing all kinds of sedimentary data, from different periods and from very different acquisition systems, and integrating them into a single product. This process has already been carried out for the bathymetric data of marine charts; in this article we discuss the application of this method at a global scale for sediment data.


Author(s):  
Vo Ngoc Phu ◽  
Vo Thi Ngoc Tran

Artificial intelligence (ARTINT) and information have been prominent fields for many years, because many different areas have advanced rapidly on the basis of them and have created significant value. This value is increasingly used by national economies, other sciences, companies, organizations, etc. Many massive corporations and large organizations have been established rapidly as these economies have developed strongly. Unsurprisingly, vast amounts of information and large-scale data sets have been generated by these corporations and organizations, and processing and storing them successfully has become a major challenge for many commercial applications and studies. To handle this problem, many algorithms have been proposed for processing these big data sets.


2015 ◽  
Vol 4 (1) ◽  
pp. 6-26 ◽  
Author(s):  
Valeria Andreoni ◽  
Apollonia Miola

Purpose – The increasing complexity of the present economic system and the strong interdependencies between production activities taking place in different world areas make modern societies vulnerable to crisis. The global supply chain is a paradigmatic example of an economic structure through which the impacts of unexpected events propagate rapidly. Climate change, which affects societies all over the world, is one of the most important factors influencing the efficiency of present economic networks. Over the last decades, a large set of studies has investigated the direct impacts generated on specific geographical areas or productions; a smaller number of analyses has quantified the cascading and indirect economic effects generated all over the world. The paper aims to discuss these issues.
Design/methodology/approach – The main objective of this paper is to provide an overview of the main studies, methodologies and databases used to investigate the climate vulnerability of the global supply chain.
Findings – The great complexity of the global economic system, coupled with methodological and data gaps, makes it difficult to estimate the domino effects of unexpected events. A clear understanding of the possible consequences generated all over the world is, however, a fundamental step to build socio-economic resilience and to plan effective adaptation strategies.
Originality/value – The information provided in this paper can be useful to support further studies, to build consistent quantification methodologies and to fill possible data gaps.


Systems ◽  
2020 ◽  
Vol 8 (2) ◽  
pp. 21 ◽  
Author(s):  
Edoardo Bertone ◽  
Martin Jason Luna Juncal ◽  
Rafaela Keiko Prado Umeno ◽  
Douglas Alves Peixoto ◽  
Khoi Nguyen ◽  
...  

Governments around the world have introduced a number of stringent policies to try to contain COVID-19 outbreaks, but the relative importance of such measures, compared with the community response to these restrictions, the amount of testing conducted, and the interconnections between them, is not yet well understood. In this study, data were collected from numerous online sources, pre-processed and analysed, and a number of Bayesian Network models were developed in an attempt to unpack this complexity. Results show that early, high-volume testing was the most crucial factor in successfully monitoring and controlling the outbreaks; where testing was low, early government and community responses were both found to be critical in predicting how rapidly cases and deaths grew in the first weeks of the outbreak. Results also highlight that in countries with low early test numbers, undiagnosed cases could have been up to five times higher than officially diagnosed cases. The analysis and models can be refined in the future with more data and variables, to understand and model potential second waves of contagion.
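The core mechanics of such a Bayesian Network — a child node whose probability is conditioned on parent variables like testing volume and response timing — can be illustrated with a toy conditional probability table. The numbers below are hypothetical, not the paper's learned parameters:

```python
# Illustrative priors and CPT: P(outbreak controlled | early testing, early response).
p_testing_high = 0.4     # prior probability of high early testing volume
p_response_early = 0.6   # prior probability of an early government/community response
p_controlled = {         # conditional probability table (made-up values)
    (True, True): 0.90,
    (True, False): 0.70,
    (False, True): 0.45,
    (False, False): 0.10,
}

# Marginal P(controlled) by enumerating the parent states.
total = 0.0
for testing in (True, False):
    for response in (True, False):
        p_parents = (p_testing_high if testing else 1 - p_testing_high) * \
                    (p_response_early if response else 1 - p_response_early)
        total += p_parents * p_controlled[(testing, response)]

print(round(total, 3))  # -> 0.514
```

Real networks like those in the study have many more nodes and learn both structure and CPT entries from data, but inference still reduces to this kind of weighted enumeration (or an approximation of it).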


2018 ◽  
Vol 7 (4) ◽  
pp. 115-155
Author(s):  
Javad Nematian

Hubs are facilities that collect, arrange and distribute commodities in telecommunication networks, cargo delivery systems, etc. This article studies two popular hub location problems (the p-hub center and p-hub maximal covering problems) under uncertainty. First, novel reliable uncapacitated p-hub location problems are introduced that account for the failure probability of hubs, in which the parameters are random fuzzy variables but the decision variables are real-valued. The proposed problems are then solved by new methods using random fuzzy chance-constrained programming based on possibility theory. These methods can satisfy both optimistic and pessimistic decision makers under an uncertain framework. Finally, some benchmark problems are solved as numerical examples to clarify the described methods and show their efficiency.
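The deterministic core of the p-hub center problem can be sketched without the article's failure probabilities or random fuzzy parameters: choose p hubs minimising the worst origin–hub–hub–destination route cost, with an inter-hub discount alpha and each node allocated to its nearest hub. The small symmetric distance matrix below is made up for illustration:

```python
from itertools import combinations

dist = [            # symmetric node-to-node distances (hypothetical)
    [0, 4, 7, 9],
    [4, 0, 3, 6],
    [7, 3, 0, 2],
    [9, 6, 2, 0],
]
n, p, alpha = 4, 2, 0.5   # 4 nodes, pick 2 hubs, 50% inter-hub discount

best_cost, best_hubs = float("inf"), None
for hubs in combinations(range(n), p):
    # Single allocation: each node is assigned to its nearest hub.
    alloc = {i: min(hubs, key=lambda h: dist[i][h]) for i in range(n)}
    # p-hub center objective: the worst discounted route cost over all pairs.
    worst = max(
        dist[i][alloc[i]] + alpha * dist[alloc[i]][alloc[j]] + dist[alloc[j]][j]
        for i in range(n) for j in range(n) if i != j
    )
    if worst < best_cost:
        best_cost, best_hubs = worst, hubs

print(best_cost, best_hubs)  # -> 6.5 (0, 2)
```

The article's contribution replaces the crisp distances and hub availability in this objective with random fuzzy variables and solves the resulting chance-constrained model; the enumeration above only shows the underlying combinatorial structure.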


2018 ◽  
Vol 30 (4) ◽  
pp. 343-356 ◽  
Author(s):  
Rajesh Kr Singh ◽  
Nikhil Chaudhary ◽  
Nikhil Saxena
