redundant data
Recently Published Documents


TOTAL DOCUMENTS: 436 (FIVE YEARS: 167)
H-INDEX: 20 (FIVE YEARS: 5)

Author(s): Jie Cheng, Bingjie Lin, Jiahui Wei, Ang Xia

To address the low security of data in network transmission and the inaccurate prediction of the future security situation, an improved neural network learning algorithm is proposed in this paper. The algorithm compensates for the shortcomings of the standard neural network learning algorithm, eliminates redundant data by means of support vectors, and achieves effective clustering of the information data. In addition, the improved algorithm uses the ordering of the data to optimize the "end" data of the standard neural network learning algorithm, thereby improving the accuracy and computational efficiency of network security situation prediction. MATLAB simulation results show that the data processing capacity of the support-vector-combined BP neural network matches the actual security situation data requirements, with a consistency of up to 98%; the consistency of the security situation results reaches 99%; the composite prediction time for the whole security situation is less than 25 s; the line-segment slope change reaches 2.3% and the slope change range reaches 1.2%, outperforming the plain BP neural network algorithm.
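For illustration, the sketch below combines the two ingredients named in the abstract, support-vector filtering of redundant samples and a BP (back-propagation) neural network, on synthetic data; the features, thresholds, and layer sizes are placeholders, not the authors' implementation.

```python
# Sketch: support-vector filtering of redundant samples followed by a small
# BP (back-propagation) neural network for situation prediction.
# The data are synthetic; model sizes are illustrative only.
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                              # placeholder situation features
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=500)    # placeholder situation value

# 1) Fit a support vector model and keep only its support vectors:
#    samples that are not support vectors add little information and are
#    treated here as redundant.
svr = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X, y)
keep = svr.support_
X_kept, y_kept = X[keep], y[keep]

# 2) Train a BP neural network (MLP) on the reduced, non-redundant set.
bp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
bp.fit(X_kept, y_kept)

print(f"kept {len(keep)} of {len(X)} samples; R^2 on all data = {bp.score(X, y):.3f}")
```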


Author(s): Min Fang

Current hotel resource retrieval algorithms suffer from low retrieval efficiency, low accuracy, low security, and high energy consumption; this study therefore proposes a large-scale hotel resource retrieval algorithm based on characteristic threshold extraction. In the large-scale hotel resource data, each mass sequence is decomposed into a periodic component, a trend component, a random-error component, and a burst component. The different components are extracted, singular points are detected from the extraction results, and the abnormal data in the hotel resource data are obtained. Based on the attributes of the hotel resource data, data similarity is computed with a variable window, the total similarity of the data is obtained, and anomaly detection of redundant resource data is realized. The abnormal-data and redundant-data detection results are fed into a space-time filter to complete the data processing. The retrieval problem is then formulated, and the processed data are used in characteristic-threshold-based hotel resource retrieval to normalize the data sources and rule knowledge. The characteristic threshold and retrieval strategy are determined, data fusion reasoning is carried out, and effective solutions are obtained after repeated iteration. The effective solutions are fused to obtain the best retrieval result. Experimental results show that the algorithm achieves higher retrieval accuracy, efficiency, and security, with an average search energy consumption of 56 nJ/bit.
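As a rough illustration of the variable-window similarity idea described above, the sketch below scores each window of a synthetic series against its predecessor and reports the least-similar window; the window length and the injected disturbance are assumptions, not the paper's data.

```python
# Sketch: windowed similarity scoring over a resource time series to flag
# windows that differ sharply from their predecessor (candidate anomalies
# or bursts). Window length and the injected disturbance are assumptions.
import numpy as np

def window_similarity(series, window=20):
    """Cosine similarity of each window against the window preceding it."""
    sims = []
    for i in range(window, len(series) - window + 1, window):
        a = series[i - window:i]
        b = series[i:i + window]
        sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return np.array(sims)

rng = np.random.default_rng(1)
series = np.sin(np.linspace(0, 20, 400)) + 0.05 * rng.normal(size=400)
series[250:270] = -series[250:270] + 1.5      # injected burst-like disturbance

sims = window_similarity(series)
least = int(np.argmin(sims))
print(f"least-similar window starts at sample {20 * (least + 1)} "
      f"(similarity {sims[least]:.2f})")
```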


2022, Vol. 72 (1), pp. 114-121
Author(s): Sudarsana Reddy Karnati, Lakshmi Boppanna, D. R. Jahagirdar

The on-board telemetry system of an aerospace vehicle sends the vehicle performance parameters to the ground receiving station throughout its trajectory. Over the course of the trajectory, the communication channel of a long-range vehicle experiences various phenomena such as plume attenuation, stage separation, vehicle manoeuvring, and RF blackout, causing loss of valuable telemetry data. The loss of the communication link is inevitable under these harsh conditions even when using the space diversity of ground receiving systems, and conventional telemetry systems do not provide redundant data for long-range aerospace vehicles. This work proposes innovative delayed data transmission, frame switchover, and multiple-frame data transmission schemes to improve the availability of telemetry data at ground receiving stations. The proposed schemes are modelled in VHDL, and extensive simulations have been performed to validate the results. The functionally simulated netlist has been synthesised for a 130 nm ACTEL flash-based FPGA and verified on telemetry hardware.
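The schemes themselves are VHDL hardware designs, but the delayed-data idea can be illustrated in a few lines of Python: each transmitted slot carries the live frame plus a copy of the frame sent D slots earlier, so frames lost during a short link outage are recovered once the link returns. The delay depth and outage span below are illustrative assumptions, not the authors' parameters.

```python
# Sketch of the delayed-data idea: every slot transmits the live frame and a
# copy of the frame from D slots earlier; frames lost in a short outage are
# recovered from the delayed copies once the downlink returns.
from collections import deque

D = 8                                     # delay depth in frames (assumed)
frames = [f"frame_{i}" for i in range(40)]
history = deque(maxlen=D)

received = {}
outage = range(15, 22)                    # slots where the downlink is lost (assumed)

for slot, live in enumerate(frames):
    delayed = history[0] if len(history) == D else None
    history.append(live)
    if slot in outage:                    # nothing reaches the ground station
        continue
    received[slot] = live                 # live copy received
    if delayed is not None:               # delayed copy of an older frame
        received.setdefault(frames.index(delayed), delayed)

missing = [f for f in frames if f not in received.values()]
print("frames still missing after recovery:", missing)
```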


2022, pp. 27-50
Author(s): Shilpi Hiteshkumar Parikh, Anushka Gaurang Sandesara, Chintan Bhatt

Network attacks are continuously surging, and attackers keep changing the ways in which they penetrate a system. A network intrusion detection system is created to monitor traffic in the network and to warn of security breaches caused by foreign entities invading the network. The experiments have been performed on the NSL-KDD dataset instead of the KDD dataset because it contains no redundant data, so the output produced by the classifiers is not biased. The main types of attacks are divided into four categories: denial of service (DoS), probe, user-to-root (U2R), and remote-to-local (R2L) attacks. Overall, this chapter proposes an in-depth study of linear and ensemble models such as logistic regression, stochastic gradient descent (SGD), naïve Bayes, LightGBM (LGBM), and XGBoost. Finally, a stacked model trained on the above-mentioned classifiers is developed and applied to detect network intrusions. Among the approaches considered, the authors found the highest accuracy (98.6%) with the stacked model and XGBoost.
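A minimal sketch of the stacking setup described above, using scikit-learn's StackingClassifier with the named base learners on synthetic data (loading NSL-KDD is out of scope here); the lightgbm and xgboost packages are assumed to be installed, and the hyperparameters are placeholders, not the chapter's settings.

```python
# Sketch: linear and ensemble base learners combined by a stacking
# meta-classifier. Synthetic data stands in for NSL-KDD.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from lightgbm import LGBMClassifier
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=40, n_informative=15,
                           n_classes=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

base_learners = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("sgd", SGDClassifier(max_iter=1000)),
    ("nb", GaussianNB()),
    ("lgbm", LGBMClassifier(n_estimators=200)),
    ("xgb", XGBClassifier(n_estimators=200, eval_metric="logloss")),
]
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000),
                           cv=5)
stack.fit(X_tr, y_tr)
print(f"stacked-model accuracy: {stack.score(X_te, y_te):.3f}")
```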


2021, Vol. 8 (4), pp. 1-19
Author(s): Xuejiao Kang, David F. Gleich, Ahmed Sameh, Ananth Grama

As parallel and distributed systems scale, fault tolerance is an increasingly important problem—particularly on systems with limited I/O capacity and bandwidth. Erasure coded computations address this problem by augmenting a given problem instance with redundant data and then solving the augmented problem in a fault oblivious manner in a faulty parallel environment. In the event of faults, a computationally inexpensive procedure is used to compute the true solution from a potentially fault-prone solution. These techniques are significantly more efficient than conventional solutions to the fault tolerance problem. In this article, we show how we can minimize, to optimality, the overhead associated with our problem augmentation techniques for linear system solvers. Specifically, we present a technique that adaptively augments the problem only when faults are detected. At any point in execution, we only solve a system whose size is identical to the original input system. This has several advantages in terms of maintaining the size and conditioning of the system, as well as in only adding the minimal amount of computation needed to tolerate observed faults. We present, in detail, the augmentation process, the parallel formulation, and evaluation of performance of our technique. Specifically, we show that the proposed adaptive fault tolerance mechanism has minimal overhead in terms of FLOP counts with respect to the original solver executing in a non-faulty environment, has good convergence properties, and yields excellent parallel performance. We also demonstrate that our approach significantly outperforms an optimized application-level checkpointing scheme that only checkpoints needed data structures.
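The following toy sketch illustrates only the underlying erasure-coding principle, augmenting data with a few parity combinations so that entries erased by faults can be recovered with a small, inexpensive solve; it is not the authors' solver-level augmentation scheme.

```python
# Minimal sketch of the erasure-coding idea: append k random parity
# combinations of a solution vector; if up to k entries are later lost to
# faults, they can be recovered by solving a small k x k system.
import numpy as np

rng = np.random.default_rng(0)
n, k = 12, 3
x = rng.normal(size=n)                  # "computed" solution
E = rng.normal(size=(k, n))             # coding (parity) matrix
parity = E @ x                          # redundant data stored alongside x

lost = [2, 7, 9]                        # indices erased by faults
x_faulty = x.copy()
x_faulty[lost] = 0.0

# Recover the lost entries from the parity equations:
# parity = E[:, kept] @ x[kept] + E[:, lost] @ x[lost]
kept = [i for i in range(n) if i not in lost]
rhs = parity - E[:, kept] @ x_faulty[kept]
x_faulty[lost] = np.linalg.solve(E[:, lost], rhs)

print("max recovery error:", np.max(np.abs(x_faulty - x)))
```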


Sensors, 2021, Vol. 22 (1), pp. 306
Author(s): Jyrki Kullaa

Structural health monitoring (SHM) with a dense sensor network and repeated vibration measurements produces large amounts of data that must be stored. If the sensor network is redundant, data compression is possible by storing only the signals of selected Bayesian virtual sensors, from which the omitted signals can be reconstructed with higher accuracy than the actual measurement. The selection of the virtual sensors for storage is made individually for each measurement based on the reconstruction accuracy. Data compression and reconstruction for SHM is the main novelty of this paper. The stored and reconstructed signals are used for damage detection and localization in the time domain using spatial or spatiotemporal correlation. A whitening transformation is applied to the training data to take environmental or operational influences into account. The first principal component of the residuals is used to localize damage and also to design the extreme value statistics control chart for damage detection. The proposed method was studied with a numerical model of a frame structure equipped with a dense accelerometer or strain sensor network. Only five acceleration or three strain signals out of the total of 59 signals were stored. The stored and reconstructed data outperformed the raw measurement data in damage detection and localization.
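A minimal sketch of the reconstruction step, assuming the virtual-sensor estimate reduces to a linear minimum mean-square (conditional Gaussian) predictor built from the training covariance; the sensor counts and correlated synthetic data are illustrative only, not the paper's model.

```python
# Sketch: reconstruct omitted sensor channels from a stored subset using the
# training covariance (linear minimum mean-square / conditional mean estimate).
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_train, n_test = 20, 2000, 5
mixing = rng.normal(size=(n_sensors, 4))          # 4 latent modes -> redundant network
train = rng.normal(size=(n_train, 4)) @ mixing.T + 0.01 * rng.normal(size=(n_train, n_sensors))
test = rng.normal(size=(n_test, 4)) @ mixing.T + 0.01 * rng.normal(size=(n_test, n_sensors))

stored = [0, 5, 9, 14, 18]                        # channels kept in storage (assumed)
omitted = [i for i in range(n_sensors) if i not in stored]

mu = train.mean(axis=0)
C = np.cov(train, rowvar=False)
# Conditional mean: E[x_o | x_s] = mu_o + C_os C_ss^{-1} (x_s - mu_s)
W = C[np.ix_(omitted, stored)] @ np.linalg.inv(C[np.ix_(stored, stored)])
recon = mu[omitted] + (test[:, stored] - mu[stored]) @ W.T

print("max reconstruction error:", np.abs(recon - test[:, omitted]).max())
```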


2021, pp. 1-12
Author(s): S. Jacophine Susmi

Gene expression profiles are sequences of numbers, and the need to analyze them has increased significantly. Gene expression data contain a large number of genes and are widely used in models for cancer classification. As the volume of such data grows, new prediction, classification, and clustering techniques are being applied to its analysis. Although a number of proposed methods achieve good results, diagnostic capability remains limited and many problems are still unsolved. To address this, an efficient gene expression data classification method is proposed in this paper. To predict a patient's cancer class from the gene expression profile, the paper presents a novel classification framework with three steps: pre-processing, feature selection, and classification. In pre-processing, missing values are filled and redundant data are removed. To obtain better classification results, the important features are selected from the database using the Adaptive Salp Swarm Optimization (ASSO) algorithm. The selected features are then given to a multi-kernel SVM (MKSVM) to classify the gene expression data into BRCA, KIRC, COAD, LUAD, and PRAD. The performance of the proposed methodology is analyzed in terms of accuracy, sensitivity, and specificity, and it is 4.5% better than the existing method in terms of accuracy.
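The sketch below mirrors only the shape of this pipeline: a filter-style feature selector stands in for ASSO (which is not implemented here), and an SVM on a summed linear + RBF kernel stands in for the multi-kernel SVM; the data are synthetic, not gene expression profiles.

```python
# Sketch of the pipeline shape: feature selection (placeholder for ASSO)
# followed by an SVM on a combined linear + RBF kernel as a simple
# multi-kernel stand-in.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=200, n_informative=25,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Feature selection (stand-in for ASSO)
selector = SelectKBest(f_classif, k=40).fit(X_tr, y_tr)
X_tr_s, X_te_s = selector.transform(X_tr), selector.transform(X_te)

# Combined kernel: weighted sum of linear and RBF kernels, used with a
# precomputed-kernel SVM (one simple way to realise a multi-kernel SVM).
def combined_kernel(A, B, w=0.5, gamma=0.01):
    return w * linear_kernel(A, B) + (1 - w) * rbf_kernel(A, B, gamma=gamma)

K_tr = combined_kernel(X_tr_s, X_tr_s)
K_te = combined_kernel(X_te_s, X_tr_s)

clf = SVC(kernel="precomputed").fit(K_tr, y_tr)
print(f"accuracy: {clf.score(K_te, y_te):.3f}")
```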


2021, Vol. 9
Author(s): Aiping Yao, Pengfei Yang, Mingjuan Ma, Yunfeng Pei

Elongated conductors, such as pacemaker leads, can couple to the MRI radio-frequency (RF) field during an MRI scan and cause dangerous tissue heating. By selecting proper RF exposure conditions, the RF-induced power deposition can be suppressed. Because the RF-induced power deposition is a complex function of multiple clinical factors, the question remains how to perform this exposure selection in a comprehensive and efficient way. The purpose of this work is to demonstrate an exposure optimization trail that allows comprehensive optimization in an efficient and traceable manner. The proposed workflow is demonstrated with a generic 40 cm cardiac pacemaker: the major components of the clinical factors are decoupled from the redundant data set using principal component analysis, and the optimized exposure condition not only reduces the in vivo power deposition but also maintains good image quality.
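A brief sketch of the decoupling step, assuming it amounts to a standard principal component analysis over a table of exposure and clinical factors; the factor names and values below are placeholders, not the study's data.

```python
# Sketch: principal component analysis over a table of exposure/clinical
# factors to find the few directions that explain most of the variation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
factors = ["landmark_position", "patient_size", "lead_path", "coil_drive", "b1_rms"]  # hypothetical
X = rng.normal(size=(300, len(factors)))
X[:, 3] = 0.8 * X[:, 0] + 0.2 * rng.normal(size=300)   # correlated (redundant) factors

pca = PCA(n_components=3).fit(StandardScaler().fit_transform(X))
for i, ratio in enumerate(pca.explained_variance_ratio_, start=1):
    print(f"PC{i}: {ratio:.2%} of variance")
```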


2021
Author(s): Barbara Ramsak, Ulrich Kuck, Eckhard Hofmann

Mating type (MAT) loci are the most important regulators of sexual reproduction and development in ascomycetous fungi. Usually, they encode one of two transcription factors (TFs), named MAT1-1-1 or MAT1-2-1. Mating-type strains carry only one of the two TF genes, which control the expression of pheromone and pheromone-receptor genes involved in the cell-cell recognition process. The present work reports the crystallization of the alpha1 (α1) domain of MAT1-1-1 from the human pathogenic fungus Aspergillus fumigatus (AfMAT1-1-1). Crystals were obtained for the complex between a polypeptide containing the α1 domain and DNA carrying the AfMAT1-1-1 recognition sequence. A streak-seeding technique was applied to improve native crystal quality, yielding diffraction data to 3.2 Å resolution. Furthermore, highly redundant data sets were collected from crystals of selenomethionine-substituted AfMAT1-1-1 to a maximum resolution of 3.2 Å. This is the first report of structural studies on an α1-domain MAT regulator involved in the mating of ascomycetes.


Computation, 2021, Vol. 9 (12), pp. 131
Author(s): Murat Mustafin, Dmitry Bykasov

Due to the huge amount of redundant data, the problem arises of finding a single integral solution that satisfies numerous possible accuracy requirements. Mathematical processing of such measurements by traditional geodetic methods can take significant time while still failing to provide the required accuracy. This article discusses the application of nonlinear programming methods to the computational processing of geodetic data. Thanks to the development of computer technology, a modern surveyor can solve newly emerging production problems using nonlinear programming methods, carrying out preliminary computational experiments to evaluate the effectiveness of a particular method for a specific problem. The efficiency and performance of various nonlinear programming methods are compared in the course of adjusting a trilateration network on a plane. An algorithm based on a modified second-order Newton's method is proposed, which uses the matrix of second partial derivatives together with the Powell and the Davis-Sven-Kempy (DSK) methods in the computational process. The new method simplifies the computational process and does not require the user to calculate highly accurate preliminary values of the parameters being determined, since it expands the region of convergence of the solution.
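As a point of reference, the sketch below runs a plain second-order Newton iteration on a small planar trilateration adjustment, using the analytic gradient and Hessian of the squared distance residuals; it is a generic Newton scheme, not the authors' modified method with the Powell and DSK steps, and the station layout and noise level are assumptions.

```python
# Sketch: second-order Newton iteration for a planar trilateration adjustment,
# locating an unknown point from redundant distance measurements by minimising
# the sum of squared residuals with analytic gradient and Hessian.
import numpy as np

stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_point = np.array([37.0, 62.0])
rng = np.random.default_rng(0)
dists = np.linalg.norm(stations - true_point, axis=1) + rng.normal(0, 0.01, 4)

def grad_hess(p):
    """Gradient and Hessian of f(p) = sum_i (||p - s_i|| - d_i)^2."""
    g = np.zeros(2)
    H = np.zeros((2, 2))
    for s, d in zip(stations, dists):
        u = p - s
        r = np.linalg.norm(u)
        g += 2 * (r - d) * u / r
        H += 2 * (np.outer(u, u) / r**2
                  + (r - d) * (np.eye(2) / r - np.outer(u, u) / r**3))
    return g, H

p = np.array([50.0, 50.0])                 # rough preliminary value
for _ in range(10):
    g, H = grad_hess(p)
    step = np.linalg.solve(H, g)
    p -= step
    if np.linalg.norm(step) < 1e-8:
        break

print("adjusted point:", p)
```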

