Web Intelligence
Latest Publications


TOTAL DOCUMENTS

168
(FIVE YEARS 79)

H-INDEX

7
(FIVE YEARS 4)

Published by IOS Press

ISSN: 2405-6464, 2405-6456

2022 ◽  
pp. 1-12
Author(s):  
Jingyi Li

Traditional financial data storage methods are prone to data leakage and offer narrow data coverage. This paper therefore proposes a dynamic, secure storage method for financial data based on a cloud platform. To improve enterprise data management, the paper constructs a financial cloud computing platform, mines financial data with rough set theory, and analyzes the results of frequent pattern mining using fuzzy attribute characteristics. Based on granularity theory, the financial data are classified and processed, and a CSA cloud risk model is established to realize dynamic and secure storage. The experimental results show that the maximum data storage delay of the method is no more than 4.1 s, the maximum data leakage risk coefficient is no more than 0.5, the number of supported data types reaches 30, and data storage coverage is improved.
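
The abstract does not give implementation details for the rough-set mining step. As a rough, hedged illustration only, the Python sketch below computes lower and upper approximations of a target record set from the equivalence classes induced by a few condition attributes; the attribute names and records are hypothetical, not taken from the paper.

```python
# Minimal rough-set approximation sketch; attribute names and records are hypothetical.
from collections import defaultdict

def approximations(records, condition_attrs, target_ids):
    """Lower/upper approximation of `target_ids` (record indices) under the
    indiscernibility relation induced by `condition_attrs`."""
    # Group records into equivalence classes by their condition-attribute values.
    classes = defaultdict(set)
    for idx, rec in enumerate(records):
        key = tuple(rec[a] for a in condition_attrs)
        classes[key].add(idx)

    lower, upper = set(), set()
    for members in classes.values():
        if members <= target_ids:    # class entirely inside the target set
            lower |= members
        if members & target_ids:     # class overlaps the target set
            upper |= members
    return lower, upper

# Hypothetical financial records: each row is one transaction profile.
records = [
    {"sector": "retail", "risk": "low",  "volume": "high"},
    {"sector": "retail", "risk": "low",  "volume": "high"},
    {"sector": "energy", "risk": "high", "volume": "low"},
]
target = {0, 1}   # e.g. records flagged for a shared storage tier
print(approximations(records, ["sector", "risk"], target))
```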


2022 ◽  
pp. 1-10
Author(s):  
Huixian Wang ◽  
Hongjiang Zheng

This paper proposes a deep mining method for high-dimensional abnormal data in the Internet of Things (IoT) based on an improved ant colony algorithm. The high-dimensional abnormal data are first preprocessed and the data correlation features are extracted. The ant colony algorithm is then improved by updating its pheromone and state transition probability rules. With the improved algorithm, the feature response signal of the high-dimensional abnormal data is extracted, the judgment threshold for abnormal data is determined, and an objective function is constructed to optimize the mining depth, thereby realizing deep data mining. The results show that the average error of the proposed method is only 0.48%.
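
The abstract names the two modified components (pheromone update and state transition probability) without giving formulas. As a baseline illustration only, the sketch below shows the standard ant-colony rules those modifications would start from; all parameter values and the "quality" stand-in are assumptions, not the paper's improved variant.

```python
# Standard ant-colony building blocks (not the paper's improved variants):
# state transition probability and pheromone evaporation/deposit.
import numpy as np

rng = np.random.default_rng(0)

def transition_probs(tau, eta, alpha=1.0, beta=2.0):
    """P(j) proportional to tau_j^alpha * eta_j^beta over candidate features j."""
    weights = (tau ** alpha) * (eta ** beta)
    return weights / weights.sum()

def update_pheromone(tau, chosen, quality, rho=0.1):
    """Evaporate all trails, then deposit on the chosen features."""
    tau = (1.0 - rho) * tau
    tau[chosen] += quality
    return tau

# Hypothetical: 5 candidate feature dimensions of the IoT data.
tau = np.ones(5)                                  # pheromone trails
eta = np.array([0.9, 0.4, 0.7, 0.2, 0.5])         # heuristic desirability (e.g. correlation)
for _ in range(20):
    p = transition_probs(tau, eta)
    chosen = rng.choice(5, size=2, replace=False, p=p)
    quality = eta[chosen].mean()                  # stand-in for the paper's objective
    tau = update_pheromone(tau, chosen, quality)
print(tau.round(3))
```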


2022 ◽  
pp. 1-22
Author(s):  
Vhatkar Kapil Netaji ◽  
G.P. Bhole

Efficient allocation of resources in the cloud environment is vital, as it directly impacts versatility and operational expenses. Containers, as a virtualization technology, are gaining popularity due to their low overhead compared to traditional virtual machines and their portability. Resource allocation methodologies in the containerized cloud dynamically or statically allocate the available pool of resources, such as CPU, memory, and disk, to users. Despite the enormous popularity of containers in cloud computing, no systematic survey of container scheduling techniques exists. This survey outlines present work on resource allocation in the containerized cloud. In this work, 64 research papers are reviewed for a better understanding of resource allocation, management, and scheduling. To add further value, the performance reported in the collected papers is examined in terms of various performance measures. The weaknesses of existing resource allocation algorithms are also identified, pointing researchers toward novel algorithms and techniques.
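
As a generic illustration of the allocation problem the surveyed methods address, and not any specific algorithm from the 64 reviewed papers, the sketch below places containers on nodes with a simple first-fit heuristic over CPU and memory; node names and resource figures are made up.

```python
# Toy first-fit container placement over CPU/memory capacities.
# Purely illustrative; real schedulers use far richer scoring and constraints.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    cpu: float      # free CPU cores
    mem: float      # free memory (GiB)
    placed: list = field(default_factory=list)

def first_fit(containers, nodes):
    placements = {}
    for cname, (cpu, mem) in containers.items():
        for node in nodes:
            if node.cpu >= cpu and node.mem >= mem:
                node.cpu -= cpu
                node.mem -= mem
                node.placed.append(cname)
                placements[cname] = node.name
                break
        else:
            placements[cname] = None   # unschedulable with current capacity
    return placements

nodes = [Node("n1", cpu=4, mem=8), Node("n2", cpu=8, mem=16)]
containers = {"web": (2, 2), "db": (4, 8), "cache": (1, 4)}
print(first_fit(containers, nodes))
```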


2021 ◽  
pp. 1-14
Author(s):  
Lin Li ◽  
Sijie Long ◽  
Jianxiu Bi ◽  
Guowei Wang ◽  
Jianwei Zhang ◽  
...  

Learning-based credit prediction has attracted great interest from academia and industry. Each institution holds a certain amount of credit data covering a limited set of users for building models, so an institution needs data from other institutions to improve model performance. However, due to privacy protection requirements and legal restrictions, institutions encounter difficulties in exchanging data, which degrades credit prediction performance. To solve this problem, this paper proposes a federated learning based semi-supervised credit prediction approach enhanced by a multi-layer label mean, which aggregates the parameters of each institution through joint training while protecting each institution's data privacy. Moreover, in practice there are usually far more unlabeled credit data than labeled data, and the feature space distribution presents multiple data-dense regions. To handle this, a local meanNet model is proposed: a multi-layer label mean based semi-supervised deep learning network. In addition, a cost-sensitive loss function is introduced in the supervised part of the local mean model. Experiments on two public credit datasets show that the proposed federated learning based approach achieves promising credit prediction performance in terms of Accuracy and F1 measures. At the same time, the framework design, which splits data aggregation and key handling, improves the security of data privacy and enhances the flexibility of model training.
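
The meanNet architecture and key-handling scheme are not spelled out in the abstract. The sketch below only illustrates the parameter-aggregation idea (federated averaging across institutions, weighted here by data volume as an assumption) together with a class-weighted cross-entropy as a generic stand-in for the paper's cost-sensitive loss; the numbers are hypothetical.

```python
# Weighted federated averaging across institutions, plus a class-weighted
# cross-entropy as a generic stand-in for the paper's cost-sensitive loss.
import numpy as np

def fed_avg(local_params, sample_counts):
    """Aggregate per-institution parameter vectors weighted by data volume."""
    weights = np.asarray(sample_counts, dtype=float)
    weights /= weights.sum()
    stacked = np.stack(local_params)               # shape: (n_institutions, n_params)
    return (weights[:, None] * stacked).sum(axis=0)

def cost_sensitive_ce(probs, labels, cost_pos=5.0, cost_neg=1.0):
    """Binary cross-entropy with a higher cost for missed defaults (label 1)."""
    probs = np.clip(probs, 1e-7, 1 - 1e-7)
    per_sample = -(cost_pos * labels * np.log(probs)
                   + cost_neg * (1 - labels) * np.log(1 - probs))
    return per_sample.mean()

# Hypothetical round: three institutions with different data volumes.
locals_ = [np.array([0.1, 0.3]), np.array([0.2, 0.1]), np.array([0.0, 0.5])]
global_params = fed_avg(locals_, sample_counts=[1000, 400, 2600])
print(global_params)
print(cost_sensitive_ce(np.array([0.9, 0.2, 0.6]), np.array([1, 0, 1])))
```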


2021 ◽  
pp. 1-19
Author(s):  
Nagaraju Pamarthi ◽  
N. Nagamalleswara Rao

The innovative trend in cloud computing is the outsourcing of data to cloud servers by individuals or enterprises. Recently, various techniques have been devised to facilitate privacy protection on untrusted cloud platforms. However, classical privacy-preserving techniques fail to prevent leakage and cause huge information loss. This paper devises a novel methodology, the Exponential Ant-lion Rider Optimization Algorithm based Bilinear Map Coefficient Generation (Exponential-AROA based BMCG) method, for privacy preservation in cloud infrastructure. The proposed Exponential-AROA is devised by integrating the exponential weighted moving average (EWMA), the Ant Lion Optimizer (ALO), and the Rider Optimization Algorithm (ROA). The input data are fed to the privacy preservation process, wherein the data matrix and the bilinear map coefficient are multiplied through a Hilbert-space-based tensor product. Here, the bilinear map coefficient is obtained by multiplying the original data matrix with modified elliptic curve cryptography (MECC) encryption to maintain data security. The bilinear map coefficient handles both the utility and the sensitive information, so an optimization-driven algorithm is utilized to evaluate the optimal bilinear map coefficient, with a newly devised fitness function that considers both privacy and utility. The proposed Exponential-AROA based BMCG provides superior performance, with a maximal accuracy of 94.024%, a maximal fitness of 1, and a minimal information loss of 5.977%.
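
The exact fitness used by Exponential-AROA is not given in the abstract beyond "considering privacy and utility". Purely as an assumption-laden sketch, the code below shows one plausible shape for such a privacy/utility fitness and uses a plain random search in place of the hybrid optimizer; the transform, weighting, and data are all hypothetical.

```python
# Illustrative privacy/utility fitness with a plain random search standing in
# for the Exponential-AROA optimizer; the fitness form here is assumed, not the paper's.
import numpy as np

rng = np.random.default_rng(1)
original = rng.normal(size=(50, 4))                # hypothetical data matrix

def transform(data, coeff):
    """Stand-in for the bilinear-map perturbation: scale plus small noise."""
    return data * coeff + rng.normal(scale=0.01, size=data.shape)

def fitness(data, perturbed):
    utility = 1.0 / (1.0 + np.linalg.norm(data - perturbed) / data.size)  # closeness
    privacy = min(1.0, np.abs(data - perturbed).mean())                   # distortion
    return 0.5 * utility + 0.5 * privacy            # assumed equal weighting

best_coeff, best_fit = None, -np.inf
for _ in range(200):                                # random search over the coefficient
    coeff = rng.uniform(0.5, 1.5)
    fit = fitness(original, transform(original, coeff))
    if fit > best_fit:
        best_coeff, best_fit = coeff, fit
print(best_coeff, round(best_fit, 4))
```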


2021 ◽  
pp. 1-11
Author(s):  
Naiyue Chen ◽  
Yi Jin ◽  
Yinglong Li ◽  
Luxin Cai

With the rapid development of social networks and the massive popularity of intelligent mobile terminals, network anomaly detection is becoming increasingly important. In daily work and life, edge nodes store a large amount of local network connection data and audit data, which can be used to analyze abnormal network behavior. As network communication grows ever closer, the amount of network connection and other related data collected by each network terminal keeps increasing, and machine learning has become the classification method of choice for analyzing the features of big data in the network. Facing the problems of excessive data and long response times in network anomaly detection, we propose a trust-based federated learning anomaly detection algorithm (TFCNN). Edge nodes train local data models and upload the learned parameters to the central node. Meanwhile, according to the training performance of each edge node, we set different weights to match the processing capacity of each terminal, which yields faster convergence and better attack classification accuracy. The user's private information is only processed locally and is never uploaded to the central server, which reduces the risk of information disclosure. Finally, we compare the basic federated learning model and the TFCNN algorithm on the KDD Cup 99 and MNIST datasets. The experimental results show that the TFCNN algorithm improves accuracy and communication efficiency.
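
The abstract describes weighting each edge node's contribution by its training performance without giving the exact rule. The sketch below shows one simple way such trust-weighted aggregation could look, with accuracy-proportional weights and a floor value chosen purely as assumptions.

```python
# Trust-weighted aggregation: each edge node's update is weighted by a trust
# score derived from its recent training performance (proportional weighting
# is an assumption; the paper's exact rule is not given in the abstract).
import numpy as np

def trust_weights(accuracies, floor=0.05):
    acc = np.asarray(accuracies, dtype=float)
    acc = np.maximum(acc, floor)       # never fully silence a node
    return acc / acc.sum()

def aggregate(updates, accuracies):
    w = trust_weights(accuracies)
    return sum(wi * ui for wi, ui in zip(w, updates))

# Hypothetical round: three edge nodes with different detection accuracy.
edge_updates = [np.array([0.2, -0.1]), np.array([0.4, 0.0]), np.array([-0.3, 0.5])]
edge_accuracy = [0.92, 0.81, 0.40]
print(aggregate(edge_updates, edge_accuracy))
```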


2021 ◽  
Vol 19 (1-2) ◽  
pp. 41-61
Author(s):  
Hanumantha Rao Nadendla ◽  
A. Srikrishna ◽  
K. Gangadhara Rao

Image classification is a classical problem in computer vision, machine learning, and image processing: an image is assigned to a prescribed category based on its visual content. In this paper, a novel classifier named RideSFO-NN is developed for image classification. The proposed method performs classification in two steps, feature extraction and classification. Initially, images from various sources are provided to the proposed Weighted Shape-Size Pattern Spectra for pattern analysis, from which the significant features are obtained for classification. Here, the proposed Weighted Shape-Size Pattern Spectra is designed by modifying the gray-scale decomposition with a weight-shape decomposition. Classification is then performed by a Neural Network (NN) classifier trained using an optimization approach, the proposed Ride Sunflower Optimization (RideSFO) algorithm, which integrates the Rider Optimization Algorithm (ROA) and the Sunflower Optimization algorithm (SFO). Finally, the image classification performance of RideSFO-NN is evaluated in terms of sensitivity, specificity, and accuracy. Based on K-fold validation, the developed RideSFO-NN method achieves a maximal accuracy of 94%, a maximal sensitivity of 93.87%, and a maximal specificity of 90.52%.
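
RideSFO itself is not specified in the abstract beyond being a ROA/SFO hybrid. As a loose illustration of the general pattern, training a small neural classifier by a population-based search over its weights, the sketch below uses a drift-toward-best update as a placeholder for the hybrid rider/sunflower moves; the data and features are synthetic.

```python
# Population-based weight search for a tiny one-layer classifier; the random
# drift-toward-best step is only a placeholder for the RideSFO update rules.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 6))                     # hypothetical image features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # hypothetical labels

def accuracy(w):
    logits = X @ w[:-1] + w[-1]                   # last entry is the bias
    return ((logits > 0).astype(int) == y).mean()

pop = rng.normal(size=(20, 7))                    # 20 candidate weight vectors
for _ in range(50):
    scores = np.array([accuracy(w) for w in pop])
    best = pop[scores.argmax()]
    # Placeholder update: drift every candidate toward the best one, plus noise.
    pop = best + 0.5 * (pop - best) + rng.normal(scale=0.1, size=pop.shape)
print("best accuracy:", accuracy(best))
```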


2021 ◽  
pp. 1-12
Author(s):  
Yinghua Feng ◽  
Wei Yang

To overcome the high energy consumption and low execution efficiency of traditional Internet of Things (IoT) packet loss rate monitoring models, a new packet loss rate monitoring model based on the differential evolution algorithm is proposed. The similarity between data points in the IoT data space is treated as data gravity. On this basis, combined with the law of gravity in the data space, the gravity between different data points is calculated, the gravities are compared, and the data are classified. From the classification results, the packet loss rate monitoring model of the IoT is established. The differential evolution algorithm is then used to solve the model and obtain the best monitoring scheme, ensuring the security of network data transmission. The experimental results show that the proposed model effectively reduces data acquisition overhead and energy consumption and improves execution efficiency, with a maximum monitoring efficiency of 99.74%.
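
The abstract only states that differential evolution solves the monitoring model. The sketch below is the textbook DE/rand/1/bin loop, minimizing a placeholder objective in place of the paper's monitoring model; population size and control parameters are arbitrary.

```python
# Textbook DE/rand/1/bin loop; the sphere function is only a placeholder for
# the paper's packet-loss monitoring objective.
import numpy as np

rng = np.random.default_rng(3)

def objective(x):                  # placeholder objective to minimize
    return np.sum(x ** 2)

dim, pop_size, F, CR = 4, 20, 0.8, 0.9
pop = rng.uniform(-5, 5, size=(pop_size, dim))
fit = np.array([objective(x) for x in pop])

for _ in range(100):
    for i in range(pop_size):
        others = [j for j in range(pop_size) if j != i]
        a, b, c = pop[rng.choice(others, 3, replace=False)]
        mutant = a + F * (b - c)                          # mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True                   # ensure one gene crosses
        trial = np.where(cross, mutant, pop[i])           # binomial crossover
        if (tf := objective(trial)) < fit[i]:             # greedy selection
            pop[i], fit[i] = trial, tf
print("best value:", fit.min())
```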


2021 ◽  
pp. 1-12
Author(s):  
Li Qian

To overcome the low classification accuracy of traditional methods, this paper proposes a new classification method for complex-attribute big data based on an iterative fuzzy clustering algorithm. First, principal component analysis and kernel local Fisher discriminant analysis are used to reduce the dimensionality of the complex-attribute big data. A Bloom filter data structure is then introduced to eliminate redundancy in the dimensionality-reduced data. Next, the redundancy-free complex-attribute big data are classified in parallel by the iterative fuzzy clustering algorithm, completing the classification. Finally, simulation results show that the accuracy, normalized mutual information, and Rand index of the proposed method are close to 1 while the RDV value is low, indicating that the proposed method achieves high classification effectiveness and fast convergence.
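
The abstract does not detail the iterative fuzzy clustering step. As a generic reference point, the sketch below is a plain fuzzy c-means iteration (alternating membership and centroid updates), the usual form such an algorithm takes; the toy data and parameters are not from the paper.

```python
# Plain fuzzy c-means: alternate membership and centroid updates.
import numpy as np

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(5, 1, (50, 3))])  # toy data

def fuzzy_c_means(X, c=2, m=2.0, iters=50):
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n)                    # random initial memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]       # centroid update
        dist = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-10
        inv = dist ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)              # membership update
    return U, centers

U, centers = fuzzy_c_means(X)
print(centers.round(2))
```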


2021 ◽  
pp. 1-11
Author(s):  
Xianghong Li

To overcome the long processing time and low data security of traditional anti-tampering operations on link network sensitive data, a tamper-proof model for link network sensitive data based on blockchain technology is proposed. The uniformly distributed random variables of sensitive node data and the difference in running distance are calculated to obtain the probability that the sensitive data meets other neighbor nodes, thereby identifying the sensitive data in the link network. The frequency domain of the infected link network's sensitive data is obtained through a squared difference function, and the membership mean of the infected data samples within the sensitive data is calculated. Based on the working principle of blockchain technology, the master key and public key of the sensitive data are set, the encryption key for the link network's sensitive data is generated, and blockchain technology is used to complete the design of the tamper-proof model for sensitive data in the link network. The experimental results show that the shortest processing time of the proposed method is about 1 s and the maximum tamper-proof security factor is about 9.7.
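
As a minimal illustration of the tamper-proof property that blockchain provides, and not the paper's key-management scheme, the sketch below chains hashed blocks of sensitive records and shows how modifying any stored record invalidates verification; the record contents are made up.

```python
# Minimal hash chain: each block commits to the previous block's hash, so any
# later modification of a stored record breaks verification.
import hashlib, json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, record):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "record": record}
    block["hash"] = block_hash({"prev": prev, "record": record})
    chain.append(block)

def verify(chain):
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        recomputed = block_hash({"prev": block["prev"], "record": block["record"]})
        if block["prev"] != prev or block["hash"] != recomputed:
            return False
    return True

chain = []
append_block(chain, {"node": "A", "payload": "sensitive-1"})
append_block(chain, {"node": "B", "payload": "sensitive-2"})
print(verify(chain))                          # True
chain[0]["record"]["payload"] = "tampered"
print(verify(chain))                          # False: the chain no longer verifies
```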

