Timely assessment of disaster and emergency response networks in the aftermath of superstorm Sandy, 2012

2018 ◽  
Vol 42 (7) ◽  
pp. 1010-1023 ◽  
Author(s):  
Jungwon Yeo ◽  
Louise Comfort ◽  
Kyujin Jung

Purpose – The purpose of this paper is to elaborate the pros and cons of two coding methods: rapid network assessment (RNA) and manual content analysis (MCA). In particular, it focuses on the applicability of a new rapid data extraction and utilization method, which can contribute to the timely coordination of disaster and emergency response operations. Design/methodology/approach – Utilizing a data set of textual information on the 2012 Superstorm Sandy response, retrieved from the LexisNexis Academic news archive, the data sets produced by the two coding methods, MCA and RNA, are subjected to social network analysis. Findings – The analysis results indicate a significant level of similarity between the data collected using the two methods. The findings indicate that the RNA method could be used effectively to extract megabytes of electronic data, characterize the emerging disaster response network and suggest timely policy implications for managers and practitioners during actual emergency response operations and coordination processes. Originality/value – Considering the growing need for timely assessment of real-time disaster response systems and emerging doubts regarding the effectiveness of the RNA method, this study contributes to uncovering the potential of the RNA method to extract relevant data from the megabytes of digitally available information. This research also illustrates the applicability of MCA for assessing real-time disaster response networks by comparing network analysis results from data sets built by both the RNA and the MCA.
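A minimal sketch of how such a cross-method comparison could be quantified: treat each coding method's output as an edge list over responding organizations and compute edge-set overlap. The organization names and the Jaccard measure are illustrative assumptions, not the paper's actual data or similarity metric.

```python
# Hypothetical sketch: comparing two coded response networks by edge overlap.
# The RNA and MCA edge lists below are invented for illustration.

def edge_jaccard(edges_a, edges_b):
    """Jaccard similarity between two undirected edge sets."""
    norm = lambda edges: {frozenset(e) for e in edges}
    a, b = norm(edges_a), norm(edges_b)
    return len(a & b) / len(a | b) if a | b else 1.0

rna_edges = [("FEMA", "NYC OEM"), ("FEMA", "Red Cross"), ("NYC OEM", "NYPD")]
mca_edges = [("FEMA", "NYC OEM"), ("FEMA", "Red Cross"), ("NYC OEM", "FDNY")]

print(edge_jaccard(rna_edges, mca_edges))  # → 0.5
```

A score near 1.0 would indicate that the rapid method recovers essentially the same network as manual coding.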

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Jiawei Lian ◽  
Junhong He ◽  
Yun Niu ◽  
Tianze Wang

Purpose – Current popular image processing technologies based on convolutional neural networks involve heavy computation, high storage cost and low accuracy for tiny defect detection, which is contrary to the high real-time performance, high accuracy and limited computing and storage resources required by industrial applications. Therefore, an improved YOLOv4, named YOLOv4-Defect, is proposed to solve these problems. Design/methodology/approach – On the one hand, this study performs multi-dimensional compression processing on the feature extraction network of YOLOv4 to simplify the model, and improves the feature extraction ability of the model through knowledge distillation. On the other hand, a prediction scale with a more detailed receptive field is added to optimize the model structure, which can improve the detection performance for tiny defects. Findings – The effectiveness of the method is verified on the public data sets NEU-CLS and DAGM 2007, and on a steel ingot data set collected in an actual industrial setting. The experimental results demonstrate that the proposed YOLOv4-Defect method greatly improves recognition efficiency and accuracy and reduces the size and computation consumption of the model. Originality/value – This paper proposes an improved YOLOv4, named YOLOv4-Defect, for surface defect detection, which is conducive to application in various industrial scenarios with limited storage and computing resources, and meets the requirements of high real-time performance and precision.
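The knowledge-distillation step mentioned above can be illustrated with a toy sketch: a compact "student" network is trained to match the "teacher" network's temperature-softened class distribution. All logits and the temperature value below are invented; this is not YOLOv4-Defect's actual loss formulation.

```python
import math

# Toy illustration of knowledge distillation: minimize the KL divergence
# between temperature-softened teacher and student output distributions.

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher_logits = [4.0, 1.0, 0.2]   # invented outputs of the large model
student_logits = [3.0, 1.5, 0.5]   # invented outputs of the compressed model
T = 4.0  # a higher temperature softens both distributions

soft_teacher = softmax(teacher_logits, T)
soft_student = softmax(student_logits, T)
distill_loss = kl_divergence(soft_teacher, soft_student)
print(round(distill_loss, 4))  # small positive value to be minimized
```

Driving this loss toward zero pushes the compressed model to reproduce the larger model's behavior, which is the sense in which distillation can preserve feature extraction ability after compression.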


2016 ◽  
Vol 17 (2) ◽  
pp. 203-210 ◽  
Author(s):  
Margie Jantti ◽  
Jennifer Heath

Purpose – The purpose of this paper is to provide an overview of the development of an institution-wide approach to learning analytics at the University of Wollongong (UOW) and the inclusion of library data drawn from the Library Cube. Design/methodology/approach – The Student Support and Education Analytics team at UOW is tasked with creating policy, frameworks and infrastructure for the systematic capture, mapping and analysis of data from across the university. The initial data set includes: log file data from Moodle sites, the Library Cube, student administration data, and tutorial and student support service usage data. Using the learning analytics data warehouse, UOW is developing new models for analysis and visualisation with a focus on the provision of near real-time data to academic staff and students to optimise learning opportunities. Findings – The distinct advantage of the learning analytics model is that the selected data sets are updated weekly, enabling near real-time monitoring and intervention where required. Inclusion of library data with the other, often disparate, data sets from across the university has enabled the development of a comprehensive platform for learning analytics. Future work will include the development of predictive models using the rapidly growing learning analytics data warehouse. Practical implications – Data warehousing infrastructure and the systematic capture and exporting of relevant library data sets are requisite for the consideration of library data in learning analytics. Originality/value – What was not anticipated five years ago, when the Value Cube was first realised, was the development of learning analytics services at UOW. The Cube afforded University of Wollongong Library considerable advantage: the framework for data harvesting and analysis was established, ready for inclusion within learning analytics data sets and subsequent reporting to faculty.
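A toy sketch of the kind of data integration described above: joining weekly extracts from disparate systems on a shared student key and flagging low engagement. The source names, fields, student IDs and thresholds are all invented for illustration, not UOW's actual schema or intervention rule.

```python
# Invented weekly extracts keyed by student ID.
moodle_logins = {"s001": 14, "s002": 3, "s003": 9}   # LMS activity
library_use = {"s001": 5, "s003": 0}                 # Library Cube extract
support_visits = {"s002": 2}                         # student support usage

# Merge the disparate sources into one record per student,
# defaulting to zero where a source has no row for that student.
students = set(moodle_logins) | set(library_use) | set(support_visits)
combined = {
    sid: {
        "moodle_logins": moodle_logins.get(sid, 0),
        "library_use": library_use.get(sid, 0),
        "support_visits": support_visits.get(sid, 0),
    }
    for sid in sorted(students)
}

# A hypothetical near real-time flag: low engagement across sources.
at_risk = [sid for sid, row in combined.items()
           if row["moodle_logins"] < 5 and row["library_use"] == 0]
print(at_risk)  # → ['s002']
```

Because the extracts refresh weekly, such a flag can drive timely intervention rather than end-of-term review.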


2015 ◽  
Vol 17 (5) ◽  
pp. 719-732
Author(s):  
Dulakshi Santhusitha Kumari Karunasingha ◽  
Shie-Yui Liong

A simple clustering method is proposed for extracting representative subsets from lengthy data sets. The main purpose of the extracted subset is to use it to build prediction models (in the form of approximating functional relationships) instead of using the entire large data set. Such smaller subsets are often required in the exploratory analysis stages of studies that involve resource-consuming investigations. A few recent studies have used a subtractive clustering method (SCM) for such data extraction, in the absence of clustering methods designed for function approximation. SCM, however, requires several parameters to be specified. This study proposes a clustering method that requires only a single parameter to be specified, yet is shown to be as effective as the SCM. A method to find suitable values for the parameter is also proposed. Owing to having only a single parameter, the proposed clustering method is shown to be orders of magnitude more efficient than SCM. The effectiveness of the proposed method is demonstrated on phase space prediction of three univariate time series and prediction of two multivariate data sets. Some drawbacks of SCM when applied for data extraction are identified, and the proposed method is shown to be a solution to them.
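As a hedged illustration of the single-parameter idea (not the authors' exact algorithm), a greedy rule with one radius parameter r can extract a representative subset: keep a point only if it lies farther than r from every point already kept, so each dense neighborhood contributes one representative.

```python
# Greedy single-parameter subset extraction (illustrative sketch).
# The only tunable quantity is the radius r.

def representative_subset(points, r):
    selected = []
    for p in points:
        # Keep p only if it is farther than r from all selected points.
        if all(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 > r
               for q in selected):
            selected.append(p)
    return selected

data = [(0.0, 0.0), (0.1, 0.1),      # first dense neighborhood
        (1.0, 1.0), (1.05, 0.95),    # second dense neighborhood
        (2.0, 2.0)]                  # isolated point
subset = representative_subset(data, r=0.5)
print(subset)  # → [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
```

A prediction model fitted on `subset` then stands in for one fitted on the full data during exploratory analysis.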


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Rajit Nair ◽  
Santosh Vishwakarma ◽  
Mukesh Soni ◽  
Tejas Patel ◽  
Shubham Joshi

Purpose – The 2019 coronavirus disease (COVID-19), which first appeared in December 2019 in the city of Wuhan, China, rapidly spread around the world and became a pandemic. It has had a devastating impact on daily lives, public health and the global economy. Positive cases must be identified as soon as possible to avoid further dissemination of the disease and to provide swift care to affected patients. The need for supportive diagnostic instruments has increased, as no specific automated toolkits are available. The latest results from radiology imaging techniques indicate that these images provide valuable details on the COVID-19 virus. Advanced artificial intelligence (AI) technologies used with radiological imagery can help diagnose this condition accurately and help compensate for the lack of specialist doctors in isolated areas. In this research, a new paradigm for the automatic detection of COVID-19 from raw chest X-ray images is presented. The proposed model, DarkCovidNet, is designed to provide accurate diagnostics for both binary classification (COVID vs no findings) and multi-class classification (COVID vs no findings vs pneumonia). The implemented model computed an average precision of 98.46% and 91.352% for the binary and multi-class classification, respectively, and an average accuracy of 98.97% and 87.868%. The DarkNet model, the classifier of the you-only-look-once (YOLO) real-time object detection method, was used in this research as the classifier. A total of 17 convolutional layers with different filters on each layer were implemented. This platform can be used by radiologists to verify their initial screening and can also be used to screen patients through the cloud.
Design/methodology/approach – This study uses the CNN-based Darknet-19 model, which serves as the platform of a real-time object detection system; its architecture is designed to detect objects in real time. This study developed the DarkCovidNet model based on the Darknet architecture with fewer layers and filters. Before discussing the DarkCovidNet model, consider the Darknet architecture and its functionality: it typically consists of five max-pooling layers and 19 convolution layers. Denote a convolution layer by C and a pooling layer by P.
Findings – The work discussed in this paper diagnoses various radiology images and develops a model that can accurately predict or classify the disease. The data set used in this work consists of COVID-19 and non-COVID-19 images taken from various sources. The deep learning model DarkCovidNet is applied to the data set and shows significant performance in both binary and multi-class classification. The model achieved an average accuracy of 98.97% for the binary detection of COVID-19, whereas in multi-class classification it achieved an average accuracy of 87.868% when classifying COVID-19, no findings and pneumonia.
Research limitations/implications – One significant limitation of this work is that a limited number of chest X-ray images was used. It is observed that COVID-19 cases are increasing rapidly. In the future, the model will be implemented on a larger data set generated from local hospitals, and its performance on that data will be checked.
Originality/value – Deep learning technology has made significant changes in the field of AI by generating good results, especially in pattern recognition. A typical CNN structure has a convolution layer that extracts features from the input with the filters it applies, a pooling layer that reduces the feature-map size for computational efficiency and a fully connected layer, which is a neural network. A CNN model is created by combining one or more such layers, and its internal parameters are adjusted to accomplish a particular task, such as classification or object recognition.
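The convolution and pooling layers described above can be sketched in a few lines of pure Python. This toy example shows only the mechanics of the two layer types on an invented 5x5 image; it is not DarkCovidNet's actual 17-layer architecture.

```python
# Toy 2-D convolution (valid padding) followed by 2x2 max pooling.

def conv2d(image, kernel):
    """Slide the kernel over the image and sum elementwise products."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)] for i in range(out_h)]

def max_pool2x2(fmap):
    """Keep the maximum of each non-overlapping 2x2 window."""
    return [[max(fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

image = [[1, 2, 0, 1, 3],
         [4, 1, 1, 0, 2],
         [0, 2, 3, 1, 1],
         [1, 0, 2, 4, 0],
         [2, 1, 0, 1, 2]]
diff_kernel = [[1, 0], [0, -1]]  # simple diagonal-difference filter

features = conv2d(image, diff_kernel)  # 4x4 feature map
pooled = max_pool2x2(features)         # 2x2 map after pooling
print(pooled)
```

Stacking such C and P layers, then flattening into a fully connected layer, yields the conventional CNN structure the abstract describes.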


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Wajid Shakeel Ahmed ◽  
Muhammad Sohaib ◽  
Jamal Maqsood ◽  
Ateeb Siddiqui

Purpose – The purpose of this study is to determine whether the intraday week (IDW) effect of currencies reflects leverage and asymmetric impacts in the currency market. The study data set comprises intraday patterns of 15 currencies from developed and emerging economies. Design/methodology/approach – The study applies the exponential generalized autoregressive conditional heteroscedasticity (E-GARCH) model to observe the IDW leverage and asymmetric effect after introducing hourly dummy variables, namely, IDWmon, IDWwed, IDWfrid and IDWfrid-mon. Findings – The results favor the propositions and confirm that the IDW effect does exist in international forex markets in the hourly trading patterns of the respective currencies. Most currencies depreciate on Monday and Wednesday compared to the rest of the days. However, on the last trading day, i.e. Friday, currencies show an appreciation pattern, which holds for both developed and emerging economies. The results show evidence of leverage and asymmetric effects, confirmed by the E-GARCH model, arising from press releases and the influence of micro-factors in the currency markets. Practical implications – The study contributes to a better theoretical understanding of currency trends in developed and emerging economies, as the IDW effect exists. Moreover, confirmation of both the leverage and asymmetric effects in the observed currencies can assist investors in making rational choices during trading hours and could secure considerable profits through incentivized trading strategies. Originality/value – The study not only adds to previous work on the hourly trading patterns of currencies with reference to IDW effects but also highlights the leverage and asymmetric effects in currencies, which will help in formulating future trading strategies, particularly for emerging economies.
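A hedged sketch of the E-GARCH(1,1) log-variance recursion underlying the analysis. The parameter values (omega, alpha, gamma, beta) are invented for illustration; a negative asymmetry coefficient gamma reproduces the leverage effect the study reports, where negative shocks raise volatility more than positive shocks of equal size.

```python
import math

# E-GARCH(1,1) log conditional-variance recursion (illustrative parameters):
# ln(sigma^2_t) = omega + beta*ln(sigma^2_{t-1})
#                 + alpha*(|z_{t-1}| - E|z|) + gamma*z_{t-1}

def egarch_log_var(shocks, omega=-0.1, alpha=0.15, gamma=-0.08, beta=0.95):
    """Log conditional-variance path for a series of standardized shocks."""
    expected_abs = math.sqrt(2 / math.pi)  # E|z| for a standard normal shock
    log_var = [0.0]  # start from unit variance
    for z in shocks[:-1]:
        log_var.append(omega + beta * log_var[-1]
                       + alpha * (abs(z) - expected_abs) + gamma * z)
    return log_var

# Equal-sized shocks of opposite sign, starting from the same state:
neg = egarch_log_var([-1.5, 0.0])[1]  # log variance after a negative shock
pos = egarch_log_var([1.5, 0.0])[1]   # log variance after a positive shock
print(neg > pos)  # → True: the leverage/asymmetric effect
```

In the study's setting, the hourly IDW dummies would enter this variance equation as additional regressors; they are omitted here for brevity.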


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Tressy Thomas ◽  
Enayat Rajabi

Purpose – The primary aim of this study is to review the studies from different dimensions, including the type of methods, experimentation setup and evaluation metrics used in the novel approaches proposed for data imputation, particularly in the machine learning (ML) area. This ultimately provides an understanding of how well the proposed frameworks are evaluated and what types and ratios of missingness are addressed in the proposals. The review questions in this study are: (1) What ML-based imputation methods were studied and proposed during 2010–2020? (2) How are the experimentation setup, characteristics of data sets and missingness employed in these studies? (3) What metrics were used for the evaluation of imputation methods? Design/methodology/approach – The review process went through the standard identification, screening and selection process. The initial search on electronic databases for missing value imputation (MVI) based on ML algorithms returned a large number of papers, totaling 2,883. Most of the papers at this stage did not describe an MVI technique relevant to this study. The papers were first screened by title for relevance, and 306 were identified as appropriate. Upon reviewing the abstracts, 151 papers not eligible for this study were dropped. This resulted in 155 research papers suitable for full-text review, of which 117 papers were used in the assessment of the review questions. Findings – This study shows that clustering- and instance-based algorithms are the most proposed MVI methods. Percentage of correct prediction (PCP) and root mean square error (RMSE) are the most used evaluation metrics in these studies. For experimentation, the majority of the studies sourced their data sets from publicly available repositories. A common approach is to set the complete data set as the baseline and evaluate the effectiveness of imputation on test data sets with artificially induced missingness. The data set size and missingness ratio varied across the experimentations, while missing datatype and mechanism pertain to the capability of imputation. Computational expense is a concern, and experimentation using large data sets appears to be a challenge. Originality/value – It is understood from the review that there is no single universal solution to the missing data problem. Variants of ML approaches work well with missingness depending on the characteristics of the data set. Most of the methods reviewed lack generalization with regard to applicability. Another concern related to applicability is the complexity of the formulation and implementation of the algorithm. Imputations based on k-nearest neighbors (kNN) and clustering algorithms are simple and easy to implement, which makes them popular across various domains.
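The kNN imputation the review identifies as popular can be sketched in a few lines: a missing value is filled with the mean of that feature over the k complete rows closest on the observed features. The tiny data set and choice of k below are invented for illustration.

```python
# Minimal kNN imputation sketch on an invented numeric data set.

def knn_impute(complete_rows, target_idx, missing_row, k=2):
    """Fill missing_row[target_idx] from the k nearest complete rows,
    measuring distance only on the observed (non-target) features."""
    def distance(a, b):
        return sum((x - y) ** 2
                   for i, (x, y) in enumerate(zip(a, b))
                   if i != target_idx) ** 0.5
    neighbors = sorted(complete_rows,
                       key=lambda r: distance(r, missing_row))[:k]
    return sum(r[target_idx] for r in neighbors) / k

complete = [(1.0, 2.0, 10.0), (1.2, 2.1, 11.0), (8.0, 9.0, 40.0)]
row_with_gap = (1.1, 2.0, None)  # third feature is missing
imputed = knn_impute(complete, target_idx=2, missing_row=row_with_gap, k=2)
print(imputed)  # → 10.5, the mean of the two nearest rows' third feature
```

This simplicity, and the absence of any model-fitting step, is exactly why the review finds kNN-style imputation so widely adopted despite its cost on large data sets.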


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Lam Hoang Viet Le ◽  
Toan Luu Duc Huynh ◽  
Bryan S. Weber ◽  
Bao Khac Quoc Nguyen

Purpose – This paper aims to identify the disproportionate impacts of the COVID-19 pandemic on labor markets. Design/methodology/approach – The authors conduct a large-scale survey of 16,000 firms from 82 industries in Ho Chi Minh City, Vietnam, and analyze the data set using different machine-learning methods. Findings – First, job loss and reduction in state-owned enterprises have been significantly larger than in other types of organizations. Second, employees of foreign direct investment enterprises suffer significantly lower labor income than those of other groups. Third, the adverse effects of the COVID-19 pandemic on the labor market are heterogeneous across industries and geographies. Finally, firms with high revenue in 2019 are more likely to adopt preventive measures, including the reduction of labor forces. The authors also find a significant correlation between firms' revenue and labor reduction, as both traditional econometrics and machine-learning techniques suggest. Originality/value – This study has two main policy implications. First, although government support through taxes has been provided, the authors highlight evidence that there may be additional benefit from targeting firms that have characteristics associated with layoffs or other negative labor responses. Second, the authors provide information showing which firm characteristics are associated with particular labor market responses, such as layoffs, which may help target stimulus packages. Although the COVID-19 pandemic affects most industries and occupations, heterogeneous firm responses suggest that there could be several varieties of targeted policies: targeting firms that are likely to reduce labor forces or firms likely to face reduced revenue. In this paper, the authors outline several industries and firm characteristics that appear to be reducing employee counts or having negative labor responses more directly, which may lead to more cost-effective stimulus.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Martinson Ankrah Twumasi ◽  
Yuansheng Jiang ◽  
Salina Adhikari ◽  
Caven Adu Gyamfi ◽  
Isaac Asare

Purpose – This paper aims to examine the determinants of rural dwellers' financial literacy in Ghana. Design/methodology/approach – A cross-sectional primary data set was used to estimate the factors influencing rural farm households' financial literacy using the IV-Tobit model. Findings – The findings reveal that most rural residents are financially illiterate. The econometric model results show that respondents' socioeconomic and demographic characteristics, such as gender, income, age and education, significantly affect financial literacy. Further, respondents who are risk seekers and who listen to or watch educational programs are more likely to be financially literate. Research limitations/implications – The paper examined the determinants of rural dwellers' financial literacy in four regions of Ghana. Future research should consider all or more regions for an informed generalization of the findings. Practical implications – This paper provides evidence that rural dwellers are financially illiterate; policymakers or non-governmental organizations (NGOs) should therefore establish village or community groups comprising a wide range of bankers and government officials to help rural dwellers acquire financial skills. Also, the positive relationship between media exposure (whether a respondent watches or listens to educational programs) and financial literacy implies that policymakers should focus on improving individuals' financial knowledge through training programs and should utilize the media as a channel to propagate financial education to the public. Originality/value – Although previous studies have examined the determinants of financial literacy, little is known about developing countries and, in particular, rural communities. The authors fill this gap by contributing to the scanty existing literature on developing countries in several ways. First, this is the first study to examine the financial literacy level of rural dwellers in Ghana. Second, so as not to undermine the credibility of the estimation results, this study addresses the potential endogeneity issue, which other researchers have not adequately recognized. Finally, the study expands the scant literature on the subject and provides critical policy implications that will help policymakers formulate financial market policies that contribute to enhancing rural dwellers' financial literacy.


2018 ◽  
Vol 25 (2) ◽  
pp. 239-250 ◽  
Author(s):  
Dung Nguyen ◽  
Hoai Nguyen ◽  
Kien S. Nguyen

Purpose – The purpose of this paper is to investigate the simultaneous relationship among ownership concentration, innovation and firm performance of small- and medium-sized enterprises (SMEs) in Vietnam during 2011–2015. By employing a Conditional Mixed Process (CMP) model, the findings show that: there is no impact of ownership concentration on innovation, but ownership concentration has a positive impact on sales growth; innovation positively affects firm performance; and there exists a positive reverse causality from sales growth to innovation. Design/methodology/approach – In this study, the authors propose the adoption of the CMP model (Roodman, 2011). The first-stage dependent variable, innovation, is binary, while the dependent variable performance is continuous. Therefore, a model that can accommodate the binary nature of the dependent variable and estimate a system of equations, such as the CMP model, is preferred. The CMP framework is substantially that of seemingly unrelated regression, but with application in a larger scope. This approach is based on a "simulated maximum likelihood method" following the Geweke–Hajivassiliou–Keane algorithm. Findings – Applying the CMP method, this study examines the simultaneous relationship among ownership concentration, innovation and firm performance of SMEs in Vietnam from 2011 to 2015. The findings indicate that: there is no impact of ownership concentration on innovation, but ownership concentration has a positive impact on sales growth; innovation positively affects firm performance; and there exists a positive reverse causality from sales growth to innovation. Research limitations/implications – In spite of the efforts to explore the simultaneous relationship among ownership concentration, innovation and firm performance of SMEs in Vietnam, the study still has some limitations, which suggest promising further research directions. First, the SME surveys by the Central Institute for Economic Management do not contain much information about other types of ownership, including state-owned and foreign ownership. Therefore, further studies with richer data sets may explore the impacts of different types of ownership on firm innovation and performance. Second, other types of innovation, such as organizational innovation and marketing innovation, can also be investigated in further studies with a richer data set for the case of Vietnamese SMEs. Originality/value – The findings show no impact of ownership concentration on innovation but a positive impact of ownership concentration on sales growth, a positive effect of innovation on firm performance, and a positive reverse causality from sales growth to innovation. The policy implications include facilitating SMEs' access to capital via loans with preferential interest rates or trust loans without collateral, training programs for the labor force and SME leaders, and the reduction of unnecessary administrative procedures.


2021 ◽  
Author(s):  
ElMehdi SAOUDI ◽  
Said Jai Andaloussi

Abstract – With the rapid growth of the volume of video data and the development of multimedia technologies, it has become necessary to browse and search accurately and quickly through information stored in large multimedia databases. For this purpose, content-based video retrieval (CBVR) has become an active area of research over the last decade. In this paper, we propose a content-based video retrieval system that returns similar videos from a large multimedia data set given a query video. The approach uses vector motion-based signatures to describe the visual content and uses machine learning techniques to extract key frames for rapid browsing and efficient video indexing. We have implemented the proposed approach on both a single machine and a real-time distributed cluster to evaluate the real-time performance aspect, especially when the number and size of videos are large. Experiments are performed using various benchmark action and activity recognition data sets, and the results reveal the effectiveness of the proposed method in both accuracy and processing time compared to state-of-the-art methods.
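One simple key-frame selection rule consistent with the abstract's description can be sketched as follows: keep a frame whenever its motion-based signature drifts beyond a threshold from the last retained key frame. The signatures and threshold are invented; the paper's actual machine-learning extraction is not specified here.

```python
# Threshold-based key-frame selection over invented 2-D motion signatures.

def select_key_frames(signatures, threshold):
    """Indices of frames whose signature moves more than `threshold`
    away from the last retained key frame."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    keys = [0]  # always keep the first frame
    for i in range(1, len(signatures)):
        if dist(signatures[i], signatures[keys[-1]]) > threshold:
            keys.append(i)
    return keys

frames = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.1),   # mostly static shot
          (3.0, 2.0), (3.1, 2.1),               # abrupt motion change
          (6.0, 5.0)]                           # another change
print(select_key_frames(frames, threshold=1.0))  # → [0, 3, 5]
```

Indexing only the retained frames, rather than every frame, is what makes browsing and matching against a query video tractable at scale.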

