Design and Development of Ternary-Based Anomaly Detection in Semantic Graphs Using Metaheuristic Algorithm

2021 ◽  
Vol 13 (5) ◽  
pp. 43-64
Author(s):  
M. Sravan Kumar Reddy ◽  
Dharmendra Singh Rajput

At present, the field of homeland security faces many obstacles in determining abnormal or suspicious entities within huge volumes of data. Several approaches have been adopted from social network analysis and data mining; however, it remains challenging to identify abnormal instances within large, complicated semantic graphs. An abnormal node is one that carries an individual or unusual semantic in the network. Hence, to define this notion, a graph structure is implemented to generate the semantic profile of each node from the various kinds of nodes and links associated with it within a specific distance via edges. Once the graph structure is framed, a ternary list is formed on the basis of each node's adjacent nodes. Abnormalities in the nodes are detected by introducing a new optimization concept referred to as biogeography-based optimization with fitness sorted update (BO-FBU), an extended version of the standard biogeography-based optimization (BBO) algorithm. Abnormal behavior in the network is identified by the similarities among the derived rule features. Further, the performance of the proposed model is compared with that of classical models in terms of several performance measures. These techniques will be useful in digital crime detection and forensics.
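
The abstract does not spell out the BO-FBU update rules; as a rough, hedged illustration of the biogeography-based optimization (BBO) mechanism it extends, the sketch below shows a standard BBO migration step in which habitats (candidate solutions) are sorted by fitness and assigned linear immigration and emigration rates. All names and parameters are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def bbo_migration_step(population, fitness, rng=np.random.default_rng(0)):
    """One illustrative BBO migration step (a sketch, not the authors' BO-FBU).

    population : (n, d) array of candidate solutions (habitats)
    fitness    : (n,) array of habitat suitability values, larger is better
    """
    n, d = population.shape
    order = np.argsort(-fitness)            # sort habitats from best to worst
    pop = population[order].copy()

    ranks = np.arange(n)
    immigration = ranks / (n - 1)           # best habitat immigrates least
    emigration = 1.0 - immigration          # best habitat emigrates most

    new_pop = pop.copy()
    for i in range(n):
        for j in range(d):
            if rng.random() < immigration[i]:
                # pick a source habitat in proportion to its emigration rate
                probs = emigration / emigration.sum()
                k = rng.choice(n, p=probs)
                new_pop[i, j] = pop[k, j]
    return new_pop
```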

2018 ◽  
Vol 140 (8) ◽  
Author(s):  
M. Chiumenti ◽  
M. Cervera ◽  
E. Salsi ◽  
A. Zonato

In this work, a novel phenomenological model is proposed to study the liquid-to-solid phase change of eutectic and hypoeutectic alloy compositions. The objective is to enhance the prediction capabilities of solidification models based on an a priori definition of the solid fraction as a function of the temperature field. At the same time, the use of models defined at the metallurgical level is avoided in order to minimize the number of material parameters required. This is of great industrial interest because, on the one hand, the classical models are not able to predict recalescence and undercooling phenomena and, on the other hand, the complexity and the experimental campaign necessary to feed most of the microstructure models available in the literature make their calibration difficult and very dependent on the chemical composition and the treatment of the melt. By contrast, the proposed model allows for easy calibration by means of a few parameters. These parameters can be easily extracted from the temperature curves recorded at the hot spot of the quick cup test, typically used in differential thermal analysis (DTA) for the quality control of the melt just before pouring. The accuracy of the numerical results is assessed by matching the temperature curves obtained via DTA of eutectic and hypoeutectic alloys. Moreover, the model is validated in more complex casting experiments in which the temperature is measured at different thermocouple locations and metallurgical features such as grain size and nucleation density are obtained from an exhaustive micrography campaign. The remarkable agreement with the experimental evidence validates the predictive capabilities of the proposed model.
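
For context, a common a priori solid-fraction law of the classical kind the paper improves upon is a simple linear (lever-rule-like) ramp between the liquidus and solidus temperatures. The sketch below is illustrative only and is not the proposed phenomenological model.

```python
def solid_fraction_linear(T, T_solidus, T_liquidus):
    """Classical a priori solid fraction: linear ramp between liquidus and solidus.

    Returns 0 above the liquidus, 1 below the solidus, and a linear fraction
    in between. A temperature-only law of this kind cannot reproduce
    recalescence or undercooling, which motivates the proposed model.
    """
    if T >= T_liquidus:
        return 0.0
    if T <= T_solidus:
        return 1.0
    return (T_liquidus - T) / (T_liquidus - T_solidus)
```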


2014 ◽  
Vol 6 (1) ◽  
pp. 1032-1035 ◽  
Author(s):  
Ramzi Suleiman

The research on quasi-luminal neutrinos has sparked several experimental studies testing the "speed of light limit" hypothesis. To date, the overall evidence favors the "null" hypothesis, stating that there is no significant difference between the observed velocities of light and neutrinos. Despite the numerous theoretical models proposed to explain the behavior of neutrinos, no attempt has been undertaken to predict the experimentally produced results. This paper presents a simple novel extension of Newton's mechanics to the domain of relativistic velocities. For a typical neutrino-velocity experiment, the proposed model is utilized to derive a general expression for . Comparison of the model's prediction with the results of six neutrino-velocity experiments, conducted by five collaborations, reveals that the model predicts all the reported results with striking accuracy. Because the direction of the neutrino flight matters in the proposed model, its success in accounting for all the tested data indicates a complete collapse of the Lorentz symmetry principle in situations involving quasi-luminal particles moving in two opposite directions. This conclusion is supported by previous findings showing that a Sagnac effect identical to the one documented for radial motion also occurs in linear motion.


2020 ◽  
Vol 23 (4) ◽  
pp. 274-284 ◽  
Author(s):  
Jingang Che ◽  
Lei Chen ◽  
Zi-Han Guo ◽  
Shuaiqun Wang ◽  
Aorigele

Background: Identification of drug-target interactions is essential in drug discovery, and it is beneficial for predicting unexpected therapeutic or adverse side effects of drugs. To date, several computational methods have been proposed to predict drug-target interactions because they are fast and low-cost compared with traditional wet-lab experiments. Methods: In this study, we investigated this problem in a different way. According to KEGG, drugs were classified into several groups based on their target proteins. A multi-label classification model was presented to assign drugs to the correct target groups. To make full use of the known drug properties, five networks were constructed, each of which represented drug associations for one property. A powerful network embedding method, Mashup, was adopted to extract drug features from the above-mentioned networks, based on which several machine learning algorithms, including the RAndom k-labELsets (RAKEL) algorithm, the Label Powerset (LP) algorithm, and the Support Vector Machine (SVM), were used to build the classification model. Results and Conclusion: Tenfold cross-validation yielded an accuracy of 0.839, an exact match of 0.816, and a Hamming loss of 0.037, indicating good performance of the model. The contribution of each network was also analyzed. Furthermore, the model built on multiple networks was found to be superior to the one built on a single network and to a classic model, indicating the superiority of the proposed approach.
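
A minimal sketch of this kind of multi-label pipeline, using scikit-multilearn and scikit-learn, is shown below. The feature matrix X stands in for the precomputed Mashup embeddings and Y for the binary drug-to-target-group labels; the random data and parameter choices are placeholders, not the authors' setup.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score, hamming_loss
from sklearn.svm import SVC
from skmultilearn.ensemble import RakelD  # or skmultilearn.problem_transform.LabelPowerset

# X: (n_drugs, n_features) Mashup-style embeddings, Y: (n_drugs, n_groups) binary labels.
# Random data stands in for the real matrices here.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
Y = (rng.random(size=(200, 8)) < 0.2).astype(int)

model = RakelD(base_classifier=SVC(), labelset_size=3)

exact, hamming = [], []
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model.fit(X[train], Y[train])
    pred = model.predict(X[test]).toarray()
    exact.append(accuracy_score(Y[test], pred))    # exact-match ratio
    hamming.append(hamming_loss(Y[test], pred))
print(np.mean(exact), np.mean(hamming))
```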


Author(s):  
Zihang Wei ◽  
Yunlong Zhang ◽  
Xiaoyu Guo ◽  
Xin Zhang

Through movement capacity is an essential factor used to reflect intersection performance, especially for signalized intersections, where a large proportion of vehicle demand is making through movements. Generally, left-turn spillback is considered a key contributor affecting through movement capacity, and blockage of the left-turn bay is known to decrease left-turn capacity. Previous studies have focused primarily on estimating the through movement capacity under a lagging protected only left-turn (lagging POLT) signal setting, as a left-turn spillback is more likely to happen under such a condition. However, previous studies relied on simplifying assumptions (e.g., omitting spillback) or were dedicated to one specific signal setting. Therefore, in this study, through movement capacity models based on probabilistic modeling of spillback and blockage scenarios are established under four different signal settings (i.e., leading protected only left-turn [leading POLT], lagging POLT, protected plus permitted left-turn, and permitted plus protected left-turn). Through microscopic simulations, the proposed models are validated and compared with existing capacity models and the one in the Highway Capacity Manual (HCM). The results of the comparisons demonstrate that the proposed models achieve significant advantages over all the other models and obtain high accuracy in all signal settings. Each proposed model for a given signal setting maintains consistent accuracy across various left-turn bay lengths. The proposed models have the potential to serve as useful tools for practicing transportation engineers when determining the appropriate length of a left-turn bay, with consideration of spillback and blockage, and an adequate cycle length for a given bay length.
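
The abstract does not reproduce the capacity equations; schematically, a probabilistic capacity model of this kind weights the through capacity attainable in each spillback/blockage scenario by the probability of that scenario. The expression below is a generic illustration of that structure under our notation, not the paper's model.

\[
C_{\text{through}} \;=\; \sum_{s \in \mathcal{S}} P(s)\, C_s
\;=\; P(\text{no spillback})\, C_{\text{full}} \;+\; P(\text{spillback})\, C_{\text{reduced}},
\]

where \( \mathcal{S} \) is the set of spillback/blockage scenarios considered for a given signal setting, \( P(s) \) depends on the left-turn demand and bay length, and \( C_s \) is the through capacity under scenario \( s \).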


Mathematics ◽  
2021 ◽  
Vol 9 (15) ◽  
pp. 1815
Author(s):  
Diego I. Gallardo ◽  
Mário de Castro ◽  
Héctor W. Gómez

A cure rate model under the competing risks setup is proposed. For the number of competing causes related to the occurrence of the event of interest, we posit the one-parameter Bell distribution, which accommodates overdispersed counts. The model is parameterized in the cure rate, which is linked to covariates. Parameter estimation is based on the maximum likelihood method. Estimates are computed via the EM algorithm. In order to compare different models, a selection criterion for non-nested models is implemented. Results from simulation studies indicate that the estimation method and the model selection criterion have a good performance. A dataset on melanoma is analyzed using the proposed model as well as some models from the literature.
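
As a sketch of how the one-parameter Bell distribution enters such a cure rate model, assuming the standard promotion-time (latent competing causes) formulation and with notation that is ours rather than necessarily the authors', the population survival function and the cure rate follow from the probability generating function of the Bell law:

\[
S_{\text{pop}}(t) \;=\; E\!\left[S(t)^{N}\right] \;=\; \exp\!\left\{ e^{\theta S(t)} - e^{\theta} \right\},
\qquad
p_{0} \;=\; P(N=0) \;=\; \exp\!\left\{ 1 - e^{\theta} \right\},
\]

where \( N \sim \mathrm{Bell}(\theta) \) is the number of competing causes, \( S(t) \) is the survival function of the latent event times, and the cure rate \( p_0 \) is the quantity linked to covariates.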


2021 ◽  
pp. 108128652110258
Author(s):  
Yi-Ying Feng ◽  
Xiao-Jun Yang ◽  
Jian-Gen Liu ◽  
Zhan-Qing Chen

The general fractional operator shows great advantages in the construction of constitutive models owing to the flexibility in choosing its embedded parameters. A generalized fractional viscoelastic–plastic constitutive model in the sense of the k-Hilfer–Prabhakar (k-H-P) fractional operator, which recovers the known classical models as special cases, is established in this article. In order to describe the damage in the creep process, a time-varying elastic element [Formula: see text] is used in the proposed model to better represent the accelerated creep stage. Based on the kinematics of deformation and the Laplace transform, the creep constitutive equation and the strain of the modified model are derived. The validity and rationality of the proposed model are verified by fitting the experimental data. Finally, the influences of the fractional derivative order [Formula: see text] and the parameter k on the creep process are investigated through sensitivity analyses with two- and three-dimensional plots.


2021 ◽  
Vol 7 ◽  
pp. e505
Author(s):  
Noha Ahmed Bayomy ◽  
Ayman E. Khedr ◽  
Laila A. Abd-Elmegid

The one constant in the world is change. The changing dynamics of the business environment force organizations to redesign or reengineer their business processes. The main objective of such reengineering is to provide services or produce products at the lowest possible cost, in the shortest time, and with the best quality. Accordingly, Business Process Re-engineering (BPR) provides a roadmap for efficiently achieving operational goals in terms of enhanced flexibility and productivity, reduced cost, and improved quality of service or product. In this article, we propose an efficient model for BPR. The model specifies where breakdowns occur in BPR implementation, explains why such breakdowns occur, and proposes techniques to prevent their recurrence. The proposed model is built on two main sections. The first section focuses on integrating Critical Success Factors (CSFs) and the performance of business processes during the reengineering process; additionally, it applies the association rule mining technique to investigate the relationship between CSFs and different business processes. The second section aims to measure the performance of business processes (the intended success of BPR) by process time, cycle time, quality, and cost before and after reengineering. A case study of the Egyptian Tax Authority (ETA) is used to test the efficiency of the proposed model.
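
The first section of the model relies on association rule mining between CSFs and business processes. A minimal sketch with the mlxtend implementation of Apriori is given below; the CSF and process column names and the small one-hot table are hypothetical placeholders, not the case-study data.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical one-hot table: each row is a reengineering case, each column marks
# whether a CSF was present or whether a business process improved after BPR.
data = pd.DataFrame(
    [
        {"CSF_top_mgmt_support": 1, "CSF_IT_infrastructure": 1, "process_improved": 1},
        {"CSF_top_mgmt_support": 1, "CSF_IT_infrastructure": 0, "process_improved": 1},
        {"CSF_top_mgmt_support": 0, "CSF_IT_infrastructure": 1, "process_improved": 0},
        {"CSF_top_mgmt_support": 1, "CSF_IT_infrastructure": 1, "process_improved": 1},
    ]
).astype(bool)

frequent = apriori(data, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```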


Author(s):  
G. P. Ong ◽  
T. F. Fwa ◽  
J. Guo

Hydroplaning on wet pavement occurs when a vehicle reaches a critical speed and loses contact between its tires and the pavement surface. This paper presents the development of a three-dimensional finite volume model that simulates the hydroplaning phenomenon. The theoretical considerations of the flow simulation model are described. The simulation results are in good agreement with the experimental results in the literature and with those obtained by the well-known hydroplaning equation of the National Aeronautics and Space Administration (NASA). The tire pressure–hydroplaning speed relationship predicted by the model is found to match well with the one obtained from the NASA hydroplaning equation. Analyses of the results of the present study indicate that pavement microtexture in the 0.2- to 0.5-mm range can delay hydroplaning (i.e., raise the speed at which hydroplaning occurs). The paper also shows that the NASA hydroplaning equation provides a conservative estimate of the hydroplaning speed. The analyses in the present study indicate that when the microtexture of the pavement is considered, the hydroplaning speed predicted by the proposed model deviates from the speed predicted by the smooth-surface relationship represented by the NASA hydroplaning equation. The discrepancies in hydroplaning speed are about 1% for a 0.1-mm microtexture depth and 22% for a 0.5-mm microtexture depth. The validity of the proposed model is further verified by checking the computed friction coefficient against the experimental results reported in the literature for pavement surfaces with known microtexture depths.
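
The NASA hydroplaning equation referenced in the abstract relates hydroplaning speed to tire inflation pressure; in its commonly quoted form (speed in mph, pressure in psi) it reads v_p ≈ 10.35 √p. A one-line check of that baseline is shown below; the example pressure is an illustrative value.

```python
import math

def nasa_hydroplaning_speed_mph(tire_pressure_psi: float) -> float:
    """Commonly quoted NASA hydroplaning equation: v_p ~= 10.35 * sqrt(p)."""
    return 10.35 * math.sqrt(tire_pressure_psi)

# e.g., a tire at 30 psi hydroplanes at roughly 57 mph on a flooded smooth surface
print(round(nasa_hydroplaning_speed_mph(30.0), 1))
```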


Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4635
Author(s):  
Angel de la Torre ◽  
Santiago Medina-Rodríguez ◽  
Jose C. Segura ◽  
Jorge F. Fernández-Sánchez

In this work, we propose a new model describing the relationship between the analyte concentration and the instrument response in photoluminescence sensors excited with modulated light sources. The concentration is modeled as a polynomial function of the analytical signal corrected with an exponent, and the model is therefore referred to as a polynomial-exponent (PE) model. The proposed approach is motivated by the limitations of the classical models in describing the frequency response of luminescence sensors excited with a modulated light source, and can be considered an extension of the Stern–Volmer model. We compare the calibration provided by the proposed PE model with that provided by the classical Stern–Volmer, Lehrer, and Demas models. For a similar complexity (i.e., with the same number of parameters to be fitted), the PE model improves the trade-off between accuracy and complexity compared with the classical models. The utility of the proposed model is supported by experiments involving two oxygen-sensitive photoluminescence sensors in instruments based on sinusoidally modulated light sources, using four different analytical signals (phase-shift, amplitude, and the two corresponding lifetimes estimated from them).
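
As described, the PE model expresses concentration as a polynomial in the analytical signal after an exponent correction. The sketch below fits one such form with SciPy; the specific functional form, symbols, and synthetic data are illustrative assumptions, not the published parameterization.

```python
import numpy as np
from scipy.optimize import curve_fit

def pe_model(x, gamma, a1, a2):
    """Illustrative polynomial-exponent form: concentration as a polynomial in x**gamma."""
    z = x ** gamma
    return a1 * z + a2 * z ** 2

# x: normalized analytical signal (e.g., a phase-shift-derived ratio), c: known concentrations
x = np.linspace(0.1, 1.0, 20)
c_true = pe_model(x, 1.4, 2.0, 0.5)
c_obs = c_true + np.random.default_rng(0).normal(scale=0.01, size=x.size)

params, _ = curve_fit(pe_model, x, c_obs, p0=[1.0, 1.0, 0.1])
print(params)  # recovered (gamma, a1, a2) for the calibration curve
```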


2020 ◽  
Vol 10 (7) ◽  
pp. 2421
Author(s):  
Bencheng Yan ◽  
Chaokun Wang ◽  
Gaoyang Guo

Recently, graph neural networks (GNNs) have achieved great success in dealing with graph-based data. The basic idea of GNNs is to iteratively aggregate information from neighbors, which is a special form of Laplacian smoothing. However, most GNNs suffer from the over-smoothing problem: as the model goes deeper, the learned representations become indistinguishable. This reflects the inability of current GNNs to explore the global graph structure. In this paper, we propose a novel graph neural network to address this problem. A rejection mechanism is designed to address the over-smoothing problem, and a dilated graph convolution kernel is presented to capture the high-level graph structure. A number of experimental results demonstrate that the proposed model outperforms state-of-the-art GNNs and can effectively overcome the over-smoothing problem.
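
The abstract describes a dilated graph convolution kernel that reaches beyond immediate neighbors to capture higher-level structure. The sketch below illustrates one simple reading of that idea: building a "dilated" neighborhood of nodes exactly k hops away from an adjacency matrix and averaging features over it. This is an illustrative interpretation, not the authors' layer.

```python
import numpy as np

def dilated_neighborhood(adj: np.ndarray, k: int) -> np.ndarray:
    """Boolean matrix marking node pairs whose shortest-path distance is exactly k."""
    n = adj.shape[0]
    a = (adj > 0).astype(int)
    walk_k = np.eye(n, dtype=int)               # pairs joined by a walk of the current length
    shorter = np.zeros((n, n), dtype=bool)      # pairs joined by some shorter walk
    for _ in range(k):
        shorter |= walk_k > 0
        walk_k = (walk_k @ a > 0).astype(int)   # extend all walks by one hop
    return (walk_k > 0) & ~shorter              # reachable in exactly k hops, not fewer

def dilated_mean_aggregate(adj: np.ndarray, features: np.ndarray, k: int) -> np.ndarray:
    """Average the features of neighbors exactly k hops away (one illustrative 'dilated' aggregation)."""
    hood = dilated_neighborhood(adj, k).astype(float)
    counts = hood.sum(axis=1, keepdims=True)
    counts[counts == 0] = 1.0                   # nodes with no k-hop neighbors: avoid division by zero
    return (hood @ features) / counts
```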

