Multimedia Image Encryption Analysis Based on High-Dimensional Chaos Algorithm

2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Xing Zhang

With the development of network and multimedia technology, multimedia communication has attracted the attention of researchers, and image encryption has become an urgent need for secure multimedia communication. Compared with traditional encryption systems, chaos-based encryption algorithms are easier to implement, which makes them more suitable for large-scale data encryption. The image encryption algorithm proposed in this paper combines high-dimensional chaotic systems: the image is first mapped and scrambled, and the Lorenz system is then used to diffuse and substitute the pixels one by one. Studies show that the algorithm thoroughly mixes pixel values, exhibits good diffusion, and offers a strong, attack-resistant key. The pixel values of the encrypted image are distributed nearly at random, and adjacent pixels are essentially uncorrelated. Experiments confirm that the proposed method provides strong security.
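A minimal sketch of the general approach, not the paper's exact algorithm: a trajectory of the Lorenz system drives both a pixel permutation (scrambling) and an XOR keystream (diffusion). The initial conditions, step size, and scaling are illustrative assumptions.

```python
# Hedged illustration of Lorenz-based image encryption (not the paper's method).
import numpy as np

def lorenz_sequence(n, x0=0.1, y0=0.2, z0=0.3,
                    sigma=10.0, rho=28.0, beta=8.0 / 3.0, dt=0.001, burn_in=5000):
    """Integrate the Lorenz system with Euler steps and return n samples of x."""
    x, y, z = x0, y0, z0
    out = np.empty(n)
    for i in range(burn_in + n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        if i >= burn_in:
            out[i - burn_in] = x
    return out

def encrypt(image, key=(0.1, 0.2, 0.3)):
    """Permute pixel positions, then XOR-diffuse values with a chaotic keystream."""
    flat = image.astype(np.uint8).ravel()
    n = flat.size
    seq = lorenz_sequence(2 * n, *key)
    perm = np.argsort(seq[:n])                                   # chaotic permutation
    keystream = ((np.abs(seq[n:]) * 1e6).astype(np.uint64) % 256).astype(np.uint8)
    return (flat[perm] ^ keystream).reshape(image.shape)

def decrypt(cipher, key=(0.1, 0.2, 0.3)):
    """Reverse the diffusion and the permutation using the same key."""
    flat = cipher.astype(np.uint8).ravel()
    n = flat.size
    seq = lorenz_sequence(2 * n, *key)
    perm = np.argsort(seq[:n])
    keystream = ((np.abs(seq[n:]) * 1e6).astype(np.uint64) % 256).astype(np.uint8)
    shuffled = flat ^ keystream
    plain = np.empty_like(shuffled)
    plain[perm] = shuffled                                       # invert permutation
    return plain.reshape(cipher.shape)

img = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
assert np.array_equal(decrypt(encrypt(img)), img)
```

Because the keystream and permutation both depend sensitively on the initial conditions, even a tiny change of key yields an entirely different ciphertext, which is the property the abstract refers to as strong key performance.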

2019 ◽  
Vol 15 (3) ◽  
pp. 64-78
Author(s):  
Chandrakala D ◽  
Sumathi S ◽  
Saran Kumar A ◽  
Sathish J

Detecting and realizing new trends in a corpus is achieved through Emergent Trend Detection (ETD) methods, a principal application of text mining. This article discusses the influence of Particle Swarm Optimization (PSO) on Dynamic Adaptive Self-Organizing Maps (DASOM) in the design of an efficient ETD scheme that optimizes the neural parameters of the network. This hybrid machine learning scheme is designed to achieve maximum accuracy with minimum computational time. The efficiency and scalability of the proposed scheme are analyzed and compared with standard algorithms such as SOM, DASOM and linear regression analysis. The system is trained and tested on the DBLP database (University of Trier, Germany). The article establishes the superiority of the hybrid DASOM algorithm over these well-known algorithms in handling high-dimensional, large-scale data to detect emergent trends in the corpus.
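A rough illustration of the hybrid idea, not the authors' DASOM implementation: PSO tunes two self-organizing-map hyperparameters (initial learning rate and neighbourhood radius, both assumed names) by minimizing quantization error on simulated feature vectors standing in for the DBLP data.

```python
# Hedged sketch: PSO optimizing SOM hyperparameters (illustrative, not DASOM).
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 8))          # stand-in for DBLP feature vectors

def som_quantization_error(params, data, n_units=10, epochs=5):
    """Train a small 1-D SOM and return the mean distance to best-matching units."""
    lr0, radius0 = params
    weights = rng.normal(size=(n_units, data.shape[1]))
    positions = np.arange(n_units)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)
        radius = max(radius0 * (1 - epoch / epochs), 1e-3)
        for x in data:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            influence = np.exp(-((positions - bmu) ** 2) / (2 * radius ** 2))
            weights += lr * influence[:, None] * (x - weights)
    return np.mean(np.min(np.linalg.norm(data[:, None, :] - weights, axis=2), axis=1))

def pso(objective, bounds, n_particles=12, iters=20, w=0.7, c1=1.5, c2=1.5):
    """Standard global-best particle swarm minimization over box-bounded parameters."""
    lo, hi = np.array(bounds).T
    pos = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best_params, best_err = pso(lambda p: som_quantization_error(p, data),
                            bounds=[(0.01, 1.0), (0.5, 5.0)])
print("learning rate, radius:", best_params, "quantization error:", best_err)
```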


Author(s):  
Pasi Luukka ◽  
Jouni Sampo

We compare differential evolution and genetic algorithms for optimizing the weights of different similarity measures in a classification task. With high-dimensional data, weighted similarity measures become very important, and suitable optimizers need to be studied. In this article we study the proper weighting of similarity measures in the classification of high-dimensional, large-scale data and show that in most cases the differential evolution algorithm, rather than the genetic algorithm, should be used to find the weights.
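A minimal sketch of this kind of weight optimization, not the authors' exact experimental setup: differential evolution searches for per-feature weights of a weighted-Euclidean nearest-mean classifier on toy high-dimensional data; the data, classifier and parameters are all illustrative assumptions.

```python
# Hedged sketch: differential evolution optimizing similarity-measure weights.
import numpy as np

rng = np.random.default_rng(1)

def weighted_accuracy(weights, X, y):
    """Classify each sample to the nearest class mean under a weighted distance."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    d = np.sqrt(((X[:, None, :] - means) ** 2 * np.abs(weights)).sum(axis=2))
    return (classes[d.argmin(axis=1)] == y).mean()

def differential_evolution(objective, dim, pop_size=20, iters=50, F=0.8, CR=0.9):
    """Classic DE/rand/1/bin, maximizing the objective."""
    pop = rng.random((pop_size, dim))
    fitness = np.array([objective(p) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = a + F * (b - c)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True          # guarantee at least one gene
            trial = np.where(cross, mutant, pop[i])
            f = objective(trial)
            if f >= fitness[i]:
                pop[i], fitness[i] = trial, f
    return pop[fitness.argmax()], fitness.max()

# Toy high-dimensional data: two classes differing only in the first few features.
X = rng.normal(size=(100, 30))
y = np.repeat([0, 1], 50)
X[y == 1, :3] += 1.5
w, acc = differential_evolution(lambda w: weighted_accuracy(w, X, y), dim=30)
print("training accuracy with optimized weights:", acc)
```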


Author(s):  
Zachary B Abrams ◽  
Caitlin E Coombes ◽  
Suli Li ◽  
Kevin R Coombes

Summary: Unsupervised machine learning provides tools for researchers to uncover latent patterns in large-scale data, based on calculated distances between observations. Methods to visualize high-dimensional data based on these distances can elucidate subtypes and interactions within multi-dimensional and high-throughput data. However, researchers can select from a vast number of distance metrics and visualizations, each with their own strengths and weaknesses. The Mercator R package facilitates selection of a biologically meaningful distance from 10 metrics, together appropriate for binary, categorical and continuous data, and visualization with 5 standard and high-dimensional graphics tools. Mercator provides a user-friendly pipeline for informaticians or biologists to perform unsupervised analyses, from exploratory pattern recognition to production of publication-quality graphics. Availability and implementation: Mercator is freely available at the Comprehensive R Archive Network (https://cran.r-project.org/web/packages/Mercator/index.html).
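Mercator itself is an R package and its API is not reproduced here; as a hedged illustration of the workflow it automates (choose a distance metric, then visualize the result), the Python sketch below computes a Jaccard distance matrix on simulated binary data and displays it with hierarchical clustering and multidimensional scaling.

```python
# Hedged illustration of the distance-then-visualize workflow (not the Mercator API).
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, dendrogram
from sklearn.manifold import MDS

rng = np.random.default_rng(2)
binary_data = rng.integers(0, 2, size=(60, 40))        # stand-in for binary omics calls

dist = pdist(binary_data, metric="jaccard")            # one of many possible metrics

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
dendrogram(linkage(dist, method="average"), ax=axes[0], no_labels=True)
axes[0].set_title("Hierarchical clustering")

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(squareform(dist))
axes[1].scatter(coords[:, 0], coords[:, 1])
axes[1].set_title("MDS embedding")
plt.tight_layout()
plt.show()
```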


2016 ◽  
Vol 29 (6) ◽  
pp. 1061-1075
Author(s):  
Eun-Kyung Lee ◽  
Nayoung Hwang ◽  
Yoondong Lee

2016 ◽  
Author(s):  
G.V. Roshchupkin ◽  
H.H.H. Adams ◽  
M.W. Vernooij ◽  
A. Hofman ◽  
C.M. Van Duijn ◽  
...  

Large-scale data collection and processing have facilitated scientific discoveries in fields such as genomics and imaging, but cross-investigations between multiple big datasets remain impractical. The computational requirements of high-dimensional association studies are often too demanding for individual sites. Additionally, the sheer size of intermediate results makes them unfit for collaborative settings where summary statistics are exchanged for meta-analyses. Here we introduce the HASE framework to perform high-dimensional association studies with a dramatic reduction in both the computational burden and the storage requirements of intermediate results. We implemented a novel meta-analytical method that yields power identical to pooled analyses without the need to share individual participant data. The efficiency of the framework is illustrated by associating 9 million genetic variants with 1.5 million brain imaging voxels in three cohorts (total N=4,034), followed by meta-analysis, on a standard computational infrastructure. These experiments indicate that HASE facilitates high-dimensional association studies, enabling large multicenter studies for future discoveries.
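A minimal sketch of the general principle of summary-statistic meta-analysis, not HASE's actual algorithm: each cohort fits a simple association (trait ~ variant) locally and shares only the effect estimate and its standard error, which are combined by inverse-variance weighting. Cohort sizes, effect size and data are simulated assumptions.

```python
# Hedged sketch: per-cohort association plus fixed-effect meta-analysis.
import numpy as np

rng = np.random.default_rng(3)

def cohort_summary(genotype, trait):
    """Ordinary least squares of trait on genotype; return (beta, se) only."""
    g = genotype - genotype.mean()
    t = trait - trait.mean()
    beta = (g @ t) / (g @ g)
    resid = t - beta * g
    se = np.sqrt(resid @ resid / (len(g) - 2) / (g @ g))
    return beta, se

def meta_analyze(summaries):
    """Inverse-variance-weighted fixed-effect combination of (beta, se) pairs."""
    betas, ses = np.array(summaries).T
    w = 1.0 / ses**2
    beta_meta = (w * betas).sum() / w.sum()
    se_meta = np.sqrt(1.0 / w.sum())
    return beta_meta, beta_meta / se_meta            # pooled effect and z-score

# Simulate three cohorts sharing the same true effect of 0.2 at one voxel.
summaries = []
for n in (1500, 1200, 1334):
    g = rng.integers(0, 3, size=n).astype(float)     # additive genotype coding
    y = 0.2 * g + rng.normal(size=n)                 # imaging trait (one voxel)
    summaries.append(cohort_summary(g, y))

beta, z = meta_analyze(summaries)
print("meta-analytic effect:", beta, "z:", z)
```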


2018 ◽  
Vol 7 (8) ◽  
pp. 313 ◽  
Author(s):  
Meng Lu ◽  
Marius Appel ◽  
Edzer Pebesma

Geographic data is growing in size and variety, which calls for big data management tools and analysis methods. To efficiently integrate information from high-dimensional data, this paper explicitly proposes array-based modeling. A large portion of Earth observations and model simulations are naturally arrays once digitized. This paper discusses the challenges of using arrays, such as the discretization of continuous spatiotemporal phenomena, irregular dimensions, regridding, high-dimensional data analysis, and large-scale data management. We define categories and applications of typical array operations, compare their implementation in open-source software, and demonstrate dimension reduction and array regridding in case studies using Landsat and MODIS imagery. Arrays turn out to be a convenient data structure for representing and analysing many spatiotemporal phenomena. Although the array model simplifies data organization, array properties such as the meaning of grid cell values are rarely made explicit in practice.
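A minimal sketch of two of the array operations discussed, using a toy raster time series rather than the paper's Landsat/MODIS data: block-average regridding to a coarser grid, followed by dimension reduction over time (per-pixel mean and linear trend).

```python
# Hedged sketch: regridding and temporal dimension reduction on a (time, y, x) cube.
import numpy as np

rng = np.random.default_rng(4)
cube = rng.random((24, 120, 120))            # 24 synthetic scenes on a 120x120 grid

def block_regrid(arr, factor):
    """Aggregate the two spatial dimensions by averaging factor x factor blocks."""
    t, ny, nx = arr.shape
    return arr.reshape(t, ny // factor, factor, nx // factor, factor).mean(axis=(2, 4))

coarse = block_regrid(cube, factor=4)        # now a (24, 30, 30) cube

# Reduce the time dimension: per-pixel mean and a least-squares linear trend.
temporal_mean = coarse.mean(axis=0)
time = np.arange(coarse.shape[0], dtype=float)
design = np.vstack([time, np.ones_like(time)]).T
slopes = np.linalg.lstsq(design, coarse.reshape(coarse.shape[0], -1), rcond=None)[0][0]
trend = slopes.reshape(coarse.shape[1:])     # slope per coarse-grid pixel

print(temporal_mean.shape, trend.shape)      # (30, 30) (30, 30)
```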


Author(s):  
Cheng Chen ◽  
Luo Luo ◽  
Weinan Zhang ◽  
Yong Yu ◽  
Yijiang Lian

The linear contextual bandit is a sequential decision-making problem in which an agent chooses among actions given their corresponding contexts. As large-scale data sets become more and more common, we study linear contextual bandits in high-dimensional settings. Recent works focus on employing matrix sketching methods to accelerate contextual bandits. However, the matrix approximation error introduces additional terms into the regret bound. In this paper we first propose a novel matrix sketching method called Spectral Compensation Frequent Directions (SCFD). We then propose an efficient approach to contextual bandits that adopts SCFD to approximate the covariance matrices. By maintaining and manipulating sketched matrices, our method needs only O(md) space and O(md) updating time in each round, where d is the dimensionality of the data and m is the sketching size. Theoretical analysis reveals that our method has better regret bounds than previous methods in high-dimensional cases. Experimental results demonstrate the effectiveness of our algorithm and verify our theoretical guarantees.
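SCFD is the paper's own method and is not reproduced here; as a hedged baseline, the sketch below implements standard Frequent Directions, which maintains an m x d sketch B whose Gram matrix B^T B approximates the covariance of the observed contexts in O(md) space per update, the quantity a sketched contextual bandit would use in place of the exact d x d matrix.

```python
# Hedged sketch: standard Frequent Directions (not the paper's SCFD variant).
import numpy as np

class FrequentDirections:
    def __init__(self, d, m):
        self.d, self.m = d, m
        self.B = np.zeros((2 * m, d))       # working buffer; at most m nonzero rows
        self.next_row = 0

    def update(self, x):
        """Insert one context vector x of shape (d,) into the sketch."""
        if self.next_row == 2 * self.m:
            self._shrink()
        self.B[self.next_row] = x
        self.next_row += 1

    def _shrink(self):
        """Halve the buffer by shrinking singular values past the m-th one."""
        _, s, Vt = np.linalg.svd(self.B, full_matrices=False)
        delta = s[self.m] ** 2
        s_shrunk = np.sqrt(np.maximum(s**2 - delta, 0.0))
        self.B = s_shrunk[:, None] * Vt      # rows m..2m-1 are now zero
        self.next_row = self.m

    def covariance(self):
        """d x d approximation of the context covariance, formed only for inspection;
        a bandit implementation would operate on B directly to stay within O(md) space."""
        return self.B.T @ self.B

# Sketch 10,000 contexts in dimension d = 500 with sketch size m = 20.
rng = np.random.default_rng(5)
d, m = 500, 20
fd = FrequentDirections(d, m)
for _ in range(10_000):
    fd.update(rng.normal(size=d))
approx_cov = fd.covariance()                 # stands in for the exact covariance matrix
```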

