Classification and pattern extraction of incidents: a deep learning-based approach

Author(s):  
Sobhan Sarkar ◽  
Sammangi Vinay ◽  
Chawki Djeddi ◽  
J. Maiti

Abstract Classifying or predicting occupational incidents using both structured and unstructured (text) data is a largely unexplored area of research. Unstructured texts, i.e., incident narratives, are often unutilized or underutilized. Beyond the explicit information, a dataset contains a large amount of hidden information that traditional machine learning (ML) algorithms cannot explore. There is also a scarcity of studies on the use of deep neural networks (DNNs) for incident prediction and on optimizing their parameters for better predictive power. To address these issues, key terms are first extracted from the unstructured texts using LDA-based topic modeling. These key terms are then combined with the predictor categories to form the feature vector, which is further processed for noise reduction and fed to a DNN trained with adaptive moment estimation (i.e., ADNN) for classification, as ADAM is superior to GD, SGD, and RMSProp. To evaluate the effectiveness of the proposed method, a comparative study against several state-of-the-art methods has been conducted on five benchmark datasets. Moreover, a case study of an integrated steel plant in India is presented to validate the proposed model. Experimental results reveal that ADNN outperforms the alternatives in terms of accuracy. The present study therefore offers a robust methodological guide for handling unstructured data and hidden information when developing a predictive model.
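A minimal sketch of the described pipeline, using scikit-learn stand-ins: LDA-derived topic weights are appended to the structured predictors, and an Adam-optimized multilayer perceptron stands in for the ADNN. All names, layer sizes, and the topic count are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.neural_network import MLPClassifier

def build_feature_vector(narratives, structured_features, n_topics=10):
    # Step 1: LDA-based topic modeling over the incident narratives.
    counts = CountVectorizer(stop_words="english", max_features=5000)
    doc_term = counts.fit_transform(narratives)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    topic_weights = lda.fit_transform(doc_term)        # document-topic matrix
    # Step 2: append the topic weights to the structured predictors.
    return np.hstack([structured_features, topic_weights])

# Step 3: Adam-optimized feed-forward network for classification
# (MLPClassifier with solver="adam" is a stand-in for the ADNN).
clf = MLPClassifier(hidden_layer_sizes=(64, 32), solver="adam", max_iter=500)
# clf.fit(build_feature_vector(train_texts, train_struct), train_labels)
```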

2019 ◽  
Vol 46 (12) ◽  
pp. 1160-1173 ◽  
Author(s):  
Zinab Abuwarda ◽  
Tarek Hegazy

Fast-tracking is an important process for speeding up the delivery of construction projects. To support optimum fast-tracking decisions, this paper introduces a generic schedule optimization framework that integrates four schedule acceleration dimensions: linear activity crashing; discrete activity modes of execution; alternative network paths; and flexible activity overlapping. Because excessive schedule compression can lead to space congestion and overstressed workers, the optimization formulation uses specific variables and constraints to prevent the simultaneous use of overlapping and crashing at the same activity segment. To handle complex projects with a variety of milestones, resource limits, and constraints, the framework has been implemented using the constraint programming (CP) technique. Comparison with a case study from the literature and further experimentation demonstrated the flexibility and superior performance of the proposed model. The novelty of the model stems from its integrated multi-dimensional formulation, its CP engine, and its ability to provide alternative fast-track schedules for strictly constrained projects without overstressing the construction workers.
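A simplified illustration (not the authors' model) of how CP can encode the interaction between crashing and overlapping, sketched with Google OR-Tools CP-SAT. The two-activity network, durations, and bounds are invented for illustration.

```python
from ortools.sat.python import cp_model

model = cp_model.CpModel()
H = 100  # planning horizon (illustrative)

# Activity B follows activity A (base durations are illustrative).
start_a = model.NewIntVar(0, H, "start_a")
crash_a = model.NewIntVar(0, 3, "crash_a")      # days removed by crashing A
dur_a   = model.NewIntVar(0, H, "dur_a")
model.Add(dur_a == 10 - crash_a)

start_b = model.NewIntVar(0, H, "start_b")
overlap = model.NewIntVar(0, 4, "overlap")      # days B may overlap A
model.Add(start_b >= start_a + dur_a - overlap)

# Key idea from the paper, encoded coarsely: do not crash and overlap
# the same activity segment simultaneously -- at most one of the two
# acceleration levers may be nonzero for activity A.
crashed    = model.NewBoolVar("crashed")
overlapped = model.NewBoolVar("overlapped")
model.Add(crash_a >= 1).OnlyEnforceIf(crashed)
model.Add(crash_a == 0).OnlyEnforceIf(crashed.Not())
model.Add(overlap >= 1).OnlyEnforceIf(overlapped)
model.Add(overlap == 0).OnlyEnforceIf(overlapped.Not())
model.AddBoolOr([crashed.Not(), overlapped.Not()])

model.Minimize(start_b + 8)  # finish time of B (duration 8, illustrative)
solver = cp_model.CpSolver()
status = solver.Solve(model)
```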


Author(s):  
Xingbo Liu ◽  
Xiushan Nie ◽  
Yingxin Wang ◽  
Yilong Yin

Hashing can compress heterogeneous high-dimensional data into compact binary codes while preserving similarity, which facilitates efficient retrieval and storage; hashing has therefore received much attention from information retrieval researchers. Most existing hashing methods predefine a fixed length (e.g., 32, 64, or 128 bits) for the hash codes before learning them at that fixed length. However, one sample can be represented by hash codes of various lengths, and because these codes represent the same sample, there must be associations and relationships among them. Harnessing these relationships can therefore boost the performance of hashing methods. Inspired by this possibility, in this study we propose a new model, jointly multiple hash learning (JMH), which can learn hash codes with multiple lengths simultaneously. In the proposed JMH method, three types of information are used for hash learning: hash codes with different lengths, the original features of the samples, and the labels. In contrast to existing hashing methods, JMH can learn hash codes with different lengths in one step, and users can select the hash codes appropriate to their retrieval tasks according to accuracy and complexity requirements. To the best of our knowledge, JMH is one of the first attempts to learn multi-length hash codes simultaneously. In addition, discrete and closed-form solutions for the variables can be obtained by cyclic coordinate descent, making the proposed model much faster during training. Extensive experiments were performed on three benchmark datasets, and the results demonstrate the superior performance of the proposed method.
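As a rough illustration of the multi-length code structure only (not of JMH's learning procedure), the toy sketch below derives nested codes of several lengths from one shared projection; JMH instead learns the codes jointly from labels and cross-length relationships via cyclic coordinate descent.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 128))          # sample features (illustrative)
lengths = [32, 64, 128]                       # desired code lengths

# One shared projection; each code length uses a prefix of its columns,
# so the codes of different lengths are related by construction.
W = rng.standard_normal((128, max(lengths)))
codes = {L: np.sign(X @ W[:, :L]).astype(np.int8) for L in lengths}

# Users can pick the length matching their accuracy/complexity budget:
# codes[32] for fast coarse retrieval, codes[128] for higher precision.
```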


2020 ◽  
Author(s):  
Kai Zhang ◽  
Yuan Zhou ◽  
Zheng Chen ◽  
Yufei Liu ◽  
Zhuo Tang ◽  
...  

Abstract The prevalence of short texts on the Web has made mining the latent topic structures of short texts a critical and fundamental task for many applications. However, owing to the lack of word co-occurrence information caused by the content sparsity of short texts, it is challenging for traditional topic models like latent Dirichlet allocation (LDA) to extract coherent topics from short texts. Incorporating external semantic knowledge into the topic modeling process is an effective strategy for improving the coherence of the inferred topics. In this paper, we develop a novel topic model, the biterm correlation knowledge-based topic model (BCK-TM), to infer latent topics from short texts. Specifically, the proposed model mines biterm correlation knowledge automatically based on recent progress in word embedding, which can represent the semantic information of words in a continuous vector space. To incorporate this external knowledge, a knowledge incorporation mechanism is designed over the latent topic layer to regularize the topic assignment of each biterm during the topic sampling process. Experimental results on three public benchmark datasets illustrate the superior performance of the proposed approach over several state-of-the-art baseline models.
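A plausible sketch of the knowledge-mining step: a biterm (word pair) is treated as correlated when the cosine similarity of its words' pretrained embeddings exceeds a threshold. The threshold value and the embedding source are assumptions, not the paper's exact settings.

```python
import numpy as np

def biterm_correlated(w1, w2, embeddings, threshold=0.6):
    """embeddings: dict mapping word -> 1-D numpy vector (pretrained)."""
    v1, v2 = embeddings.get(w1), embeddings.get(w2)
    if v1 is None or v2 is None:
        return False                      # out-of-vocabulary: no knowledge
    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return cos >= threshold

# Correlated biterms can then bias the Gibbs sampler toward assigning
# both words of the biterm to the same latent topic.
```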


2020 ◽  
Vol 34 (07) ◽  
pp. 12829-12836 ◽  
Author(s):  
Ling Zhang ◽  
Chengjiang Long ◽  
Xiaolong Zhang ◽  
Chunxia Xiao

Residual images and illumination estimation have proved very helpful in image enhancement. In this paper, we propose RIS-GAN, a general and novel framework that explores residual and illumination with generative adversarial networks for shadow removal. Combined with the coarse shadow-removal image, the estimated negative residual images and inverse illumination maps are used to generate indirect shadow-removal images that refine the coarse result into a fine shadow-free image in a coarse-to-fine fashion. Three discriminators are designed to jointly distinguish whether the predicted negative residual images, shadow-removal images, and inverse illumination maps are real or fake against the corresponding ground-truth information. To the best of our knowledge, we are the first to explore residual and illumination for shadow removal. We evaluate our proposed method on two benchmark datasets, SRD and ISTD, and extensive experiments demonstrate that it achieves superior performance over state-of-the-art methods, even though no particular shadow-aware components are designed into our generators.
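The coarse-to-fine fusion can be sketched schematically as below: the negative residual and the inverse illumination map each yield an indirect shadow-free estimate that refines the coarse result. In RIS-GAN all three quantities come from learned generators; the fixed fusion weights here are purely illustrative assumptions.

```python
import numpy as np

def fuse_shadow_free(shadow_img, coarse, neg_residual, inv_illum,
                     weights=(0.4, 0.3, 0.3)):
    # All inputs are float arrays in [0, 1] with identical shapes.
    est_residual = shadow_img + neg_residual   # indirect estimate 1
    est_illum = shadow_img * inv_illum         # indirect estimate 2
    w1, w2, w3 = weights                       # illustrative fusion weights
    fine = w1 * coarse + w2 * est_residual + w3 * est_illum
    return np.clip(fine, 0.0, 1.0)             # fine shadow-free image
```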


2020 ◽  
Vol 6 ◽  
pp. e280 ◽
Author(s):  
Bashir Muftah Ghariba ◽  
Mohamed S. Shehata ◽  
Peter McGuire

Visual attention is one of the many functions of the human visual system (HVS). Despite many advances in visual saliency prediction, there remains room for improvement, and deep learning has recently been used to address this task. This study proposes a novel deep learning model based on a Fully Convolutional Network (FCN) architecture. The proposed model is trained in an end-to-end style and designed to predict visual saliency; the entire model is trained from scratch to extract distinguishing features. The proposed model is evaluated using several benchmark datasets, including MIT300, MIT1003, TORONTO, and DUT-OMRON. The quantitative and qualitative analyses of the experiments demonstrate that the proposed model achieves superior performance in predicting visual saliency.
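A minimal PyTorch sketch of an FCN-style saliency predictor trained end-to-end, with a convolutional encoder, an upsampling decoder, and a sigmoid saliency map; the layer sizes are illustrative assumptions and the paper's actual architecture differs.

```python
import torch
import torch.nn as nn

class SaliencyFCN(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: two stride-2 convolutions downsample by 4x in total.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: transposed convolutions restore the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):                  # x: (N, 3, H, W)
        return torch.sigmoid(self.decoder(self.encoder(x)))  # (N, 1, H, W)

model = SaliencyFCN()
saliency = model(torch.randn(1, 3, 224, 224))  # one forward pass
```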


Author(s):  
D.S. Guru ◽  
K. Swarnalatha ◽  
N. Vinay Kumar ◽  
Basavaraj S. Anami

In this article, features are selected for imbalanced text data using feature clustering and feature ranking. Initially, the text documents are represented in a lower-dimensional space using the term class relevance (TCR) method. Class-wise clustering is recommended to balance the documents in each class. Subsequently, the clusters are treated as classes, and the documents of each cluster are again represented in the lower-dimensional form using TCR. The features are then clustered; for each feature cluster a representative is selected, and these representatives are used as the selected features of the documents. Hence, the proposed model reduces the dimension to a smaller number of features. Four feature evaluation methods are used for selecting the cluster representatives, and classification is performed using an SVM classifier. The performance of the method is compared with that of a global feature-ranking method. Experiments are conducted on two benchmark datasets, Reuters-21578 and TDT2. The experimental results show that the method performs well compared with other existing works.
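A rough sketch of the feature-clustering step, under assumptions: features (columns of a document-term matrix) are clustered with k-means, and the highest-scoring feature in each cluster is kept as the representative. Chi-squared scoring stands in for the four evaluation methods compared in the article.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import chi2

def select_representative_features(X, y, n_clusters=100):
    """X: dense (n_docs, n_features) non-negative matrix; y: class labels."""
    scores, _ = chi2(X, y)                       # per-feature relevance
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(X.T)                 # cluster the features
    # Keep the best-scoring feature of each cluster as its representative.
    selected = [np.where(labels == c)[0][np.argmax(scores[labels == c])]
                for c in range(n_clusters)]
    return X[:, selected]                        # reduced representation
```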


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Narjes Rohani ◽  
Changiz Eslahchi

Abstract Drug-drug interaction (DDI) prediction is one of the most critical issues in drug development and health. Proposing appropriate computational methods for predicting unknown DDIs with high precision is challenging. We propose NDD, a neural network-based method for drug-drug interaction prediction, which predicts unknown DDIs using various information about drugs. Multiple drug similarities are calculated based on drug substructure, target, side effect, off-label side effect, pathway, transporter, and indication data. NDD first uses a heuristic similarity selection process and then integrates the selected similarities with a nonlinear similarity fusion method to obtain high-level features; afterward, it uses a neural network for interaction prediction. The similarity selection and similarity integration parts of NDD have been proposed in previous studies of other problems; our novelty is to combine these parts with a new neural network architecture and apply them in the context of DDI prediction. We compared NDD with six machine learning classifiers and six state-of-the-art graph-based methods on three benchmark datasets. NDD achieved superior performance in cross-validation, with AUPR ranging from 0.830 to 0.947, AUC from 0.954 to 0.994, and F-measure from 0.772 to 0.902. Moreover, cumulative evidence from case studies on numerous drug pairs further confirms the ability of NDD to predict unknown DDIs. The evaluations corroborate that NDD is an efficient method for predicting unknown DDIs. The data and implementation of NDD are available at https://github.com/nrohani/NDD.
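A condensed sketch of NDD's flow under stated assumptions: select informative similarity matrices, fuse them, and represent each drug pair by its concatenated similarity profiles for a neural classifier. The variance-based selection rule below is an assumption standing in for the paper's heuristic, and `nonlinear_fusion` is a hypothetical placeholder for the fusion step.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def select_similarities(sim_mats, keep=4):
    # Keep the matrices with the largest spread as a rough proxy for
    # informativeness (the paper uses its own heuristic selection).
    spread = [np.var(S) for S in sim_mats]
    return [sim_mats[i] for i in np.argsort(spread)[-keep:]]

def pair_features(fused, i, j):
    # Represent the drug pair (i, j) by concatenating the two drugs'
    # rows of the fused similarity matrix.
    return np.concatenate([fused[i], fused[j]])

# fused = nonlinear_fusion(select_similarities(all_sims))  # hypothetical
clf = MLPClassifier(hidden_layer_sizes=(300, 100), max_iter=500)
# clf.fit(X_pairs, y_pairs)   # X_pairs stacked from pair_features(...)
```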


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-15 ◽  
Author(s):  
Tinggui Chen ◽  
Shiwen Wu ◽  
Jianjun Yang ◽  
Guodong Cong ◽  
Gongfa Li

Many roads in disaster areas are damaged and obstructed after sudden-onset disasters, and the resulting traffic deterioration raises the time and cost of emergency supply scheduling. Fortunately, repairing the road network shortens the time of in-transit distribution. In this paper, according to the characteristics of emergency supplies distribution, an emergency supply scheduling model based on multiple warehouses and stricken locations is constructed to deal with the partial failure of road networks in the early post-disaster phase. The detailed process is as follows: when part of the road network fails, we first determine whether to repair the damaged roads, and then a reliable emergency supply scheduling model based on bi-level programming is proposed. Subsequently, an improved artificial bee colony algorithm is presented to solve this problem. Finally, the effectiveness and efficiency of the proposed model and algorithm are verified through a case study.
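For orientation, a generic artificial bee colony (ABC) skeleton for minimization is sketched below, showing the employed, onlooker, and scout phases that the paper's improved variant builds on. The objective and bounds are placeholders, not the scheduling model, and the onlooker phase is simplified (canonical ABC selects food sources fitness-proportionally).

```python
import random

def abc_minimize(objective, dim, bounds, n_food=20, limit=50, iters=200):
    lo, hi = bounds
    foods = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fits = [objective(f) for f in foods]
    trials = [0] * n_food
    for _ in range(iters):
        # Employed and (simplified) onlooker phases: perturb one dimension
        # toward/away from a random partner and keep the candidate greedily.
        for _phase in ("employed", "onlooker"):
            for i in range(n_food):
                k = random.randrange(n_food - 1)
                k = k if k < i else k + 1              # partner != i
                j = random.randrange(dim)
                cand = foods[i][:]
                cand[j] += random.uniform(-1, 1) * (foods[i][j] - foods[k][j])
                cand[j] = min(max(cand[j], lo), hi)
                cand_fit = objective(cand)
                if cand_fit < fits[i]:
                    foods[i], fits[i], trials[i] = cand, cand_fit, 0
                else:
                    trials[i] += 1
        # Scout phase: abandon stagnant sources and re-sample them.
        for i in range(n_food):
            if trials[i] > limit:
                foods[i] = [random.uniform(lo, hi) for _ in range(dim)]
                fits[i], trials[i] = objective(foods[i]), 0
    best = min(range(n_food), key=lambda i: fits[i])
    return foods[best], fits[best]

# Example: minimize a placeholder cost (sum of squares), 5 variables.
sol, cost = abc_minimize(lambda x: sum(v * v for v in x), 5, (-10.0, 10.0))
```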


2020 ◽  
pp. 002216782098214 ◽
Author(s):  
Tami Gavron

This article describes the significance of an art-based psychosocial intervention with a group of 9 head kindergarten teachers in Japan after the 2011 tsunami, as co-constructed by Japanese therapists and an Israeli arts therapist. Six core themes emerged from the analysis of a group case study: (1) mutual playfulness and joy, (2) rejuvenation and regaining control, (3) containment of a multiplicity of feelings, (4) encouragement of verbal sharing, (5) mutual closeness and support, and (6) the need to support cultural expression. These findings suggest that art making can enable coping with the aftermath of natural disasters. The co-construction underscores the value of integrating the local Japanese culture when implementing Western arts therapy approaches. It is suggested that art-based psychosocial interventions can elicit and nurture coping and resilience in a specific cultural context and that the arts and creativity can serve as a powerful humanistic form of posttraumatic care.


Author(s):  
Chen Qi ◽  
Shibo Shen ◽  
Rongpeng Li ◽  
Zhifeng Zhao ◽  
Qing Liu ◽  
...  

Abstract Nowadays, deep neural networks (DNNs) are rapidly being deployed to realize functionalities such as sensing, imaging, classification, and recognition. However, the computation-intensive requirements of DNNs make them difficult to deploy on resource-limited Internet of Things (IoT) devices. In this paper, we propose a novel pruning-based paradigm that reduces the computational cost of DNNs by uncovering a more compact structure and learning the effective weights therein, without compromising the expressive capability of the network. In particular, our algorithm achieves efficient end-to-end training that directly transforms a redundant neural network into a compact one with a specifically targeted compression rate. We comprehensively evaluate our approach on various representative benchmark datasets and compare it with typical advanced convolutional neural network (CNN) architectures. The experimental results verify the superior performance and robust effectiveness of our scheme. For example, when pruning VGG on CIFAR-10, our scheme reduces FLOPs (floating-point operations) and the number of parameters by 76.2% and 94.1%, respectively, while still maintaining satisfactory accuracy. In sum, our scheme could facilitate the integration of DNNs into the common machine-learning-based IoT framework and enable distributed training of neural networks in both cloud and edge.
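As a hedged sketch of the general pruning idea (not the authors' end-to-end method, which learns the compact structure during training rather than pruning post hoc), PyTorch's built-in magnitude pruning can remove a chosen proportion of low-magnitude weights:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small illustrative network; sizes are assumptions, not the paper's VGG.
model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))

# Prune 94% of the weights in each Linear layer by L1 magnitude
# (proportion chosen to echo the parameter reduction reported above).
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.94)
        prune.remove(module, "weight")   # make the pruning permanent

sparsity = (model[0].weight == 0).float().mean().item()
print(f"layer-0 sparsity: {sparsity:.2%}")
```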

