image pixels
Recently Published Documents

TOTAL DOCUMENTS: 335 (last five years: 139)
H-INDEX: 16 (last five years: 4)

Author(s):  
Hayder Mazin Makki Alibraheemi ◽  
Qais Al-Gayem ◽  
Ehab AbdulRazzaq Hussein

This paper presents the design and simulation of a hyperchaotic communication system based on a four-dimensional (4D) Lorenz generator. Synchronization between the master (transmitter) and the slave (receiver) relies on the dynamic feedback modulation (DFM) technique: the mismatch error between the master and slave dynamics is computed continuously to maintain synchronization. The information signal (a binary image) is masked (encrypted) with the hyperchaotic sample x of the Lorenz generator. The overall system is designed and simulated in MATLAB Simulink. The simulation results show that the system is suitable for securing plain data, in particular image data of 128×128 pixels, requiring about 0.1 second for encryption and decryption in the presence of channel noise. The decryption results for grayscale and colored images show that the system can accurately decipher the ciphered image, albeit with low-level distortion of the image pixels due to channel noise. These results make the proposed cryptosystem suitable for real-time secure communications.
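The abstract does not reproduce the 4D Lorenz equations or the DFM synchronization law, so the Python sketch below only illustrates the masking step: a commonly used 4D hyperchaotic Lorenz form (the equations and parameters a, b, c, r are assumptions) drives a byte keystream that is XORed with the image, so the same call encrypts and decrypts.

```python
import numpy as np

# A common 4D hyperchaotic Lorenz form; the paper's exact system is not
# given in the abstract, so these equations and parameters are assumptions.
def hyperlorenz_step(state, dt=0.001, a=10.0, b=8.0 / 3.0, c=28.0, r=-1.0):
    x, y, z, w = state
    dx = a * (y - x) + w
    dy = c * x - y - x * z
    dz = x * y - b * z
    dw = -y * z + r * w
    return state + dt * np.array([dx, dy, dz, dw])

def keystream(n_bytes, seed=(0.1, 0.2, 0.3, 0.4)):
    """Quantize the x component of the trajectory into a byte keystream."""
    state = np.array(seed, dtype=float)
    out = np.empty(n_bytes, dtype=np.uint8)
    for i in range(n_bytes):
        state = hyperlorenz_step(state)
        out[i] = int(abs(state[0]) * 1e6) % 256  # illustrative quantization
    return out

def mask(image_bytes, seed=(0.1, 0.2, 0.3, 0.4)):
    """XOR masking: applying the same keystream twice recovers the image."""
    return image_bytes ^ keystream(image_bytes.size, seed)

# Example: mask a 128x128 binary image, then recover it.
img = (np.random.rand(128, 128) > 0.5).astype(np.uint8) * 255
cipher = mask(img.ravel()).reshape(img.shape)
plain = mask(cipher.ravel()).reshape(img.shape)
assert np.array_equal(plain, img)
```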


Author(s):  
Huda Kadhim Tayyeh ◽  
Ahmed Sabah Ahmed AL-Jumaili

Steganography is a data-hiding technique related to cryptography, in which secret information is hidden inside multimedia files such as images and videos. It offers a way of exchanging secret, encrypted information through an inconspicuous channel that only the communicating parties can interpret. The literature has shown great interest in the least significant bit (LSB) technique, which embeds the secret message bits into the least significant bits of the image pixels. Although LSB offers stable performance for image steganography, much can still be done on the message side. This paper proposes a combination of LSB and the Deflate compression algorithm for image steganography. Deflate combines LZ77 and Huffman coding. After compressing the message text, LSB is applied to embed the text within the cover image. On benchmark images, the proposed method outperforms the state of the art, demonstrating the efficacy of applying Deflate compression prior to LSB embedding.
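A minimal Python sketch of this pipeline, assuming sequential LSB embedding with a hypothetical 4-byte length header (the paper's exact embedding order is not given); zlib provides the Deflate (LZ77 + Huffman) stage:

```python
import zlib
import numpy as np

def embed(cover, message: bytes):
    """Deflate-compress the message, then write its bits into the LSBs of
    the cover image pixels; a 4-byte length header tells the extractor
    where to stop."""
    payload = zlib.compress(message)              # Deflate = LZ77 + Huffman
    payload = len(payload).to_bytes(4, "big") + payload
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.ravel().copy()
    if bits.size > flat.size:
        raise ValueError("cover image too small for this message")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

def extract(stego) -> bytes:
    flat = stego.ravel()
    n = int.from_bytes(np.packbits(flat[:32] & 1).tobytes(), "big")
    bits = flat[32:32 + 8 * n] & 1
    return zlib.decompress(np.packbits(bits).tobytes())

# Round trip on a random grayscale cover image.
cover = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
secret = b"attack at dawn" * 50
assert extract(embed(cover, secret)) == secret
```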


2022 ◽  
Vol 2022 ◽  
pp. 1-11
Author(s):  
Siyi Jia ◽  
Heng Chen

In cross-media image reproduction, the accurate transfer and reproduction of colour between different media is an important issue, and colour mapping is the key technology for preserving image detail and improving the level of colour reproduction. Colour in an image behaves differently from an isolated colour patch: the visual perception of each colour is not independent, and every pixel is affected by its surrounding pixels. Mapping processes that ignore this spatial context, in particular the mutual influence of adjacent pixels, produce unsatisfactory mapped images. Traditional colour mapping algorithms also ignore the colour distortion caused by individual colour components; on this basis, a block diagram of the colour mapping system is constructed. Following the flow of the established recognition algorithm, a linear mapping maps the maximum and minimum brightness values of the image to the maximum and minimum brightness values of the display device. A colour adjustment method for the colour-mapped image is then established and the processing effect of the mapping algorithm is analysed. The results show that colour brightness compensation reduces the brightness deviation of the image and improves the colour resolution.
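The linear mapping step named in the abstract can be sketched as below; the function maps the image's minimum and maximum brightness onto an assumed display range, which illustrates only that one step, not the authors' full colour adjustment method.

```python
import numpy as np

def linear_tone_map(luma, display_min=0.0, display_max=1.0):
    """Map the image's min/max brightness onto the display's brightness
    range (the linear mapping step named in the abstract)."""
    lo, hi = float(luma.min()), float(luma.max())
    if hi == lo:                          # flat image: avoid division by zero
        return np.full(luma.shape, (display_min + display_max) / 2.0)
    return display_min + (luma - lo) * (display_max - display_min) / (hi - lo)
```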


2022 ◽  
Vol 8 (1) ◽  
pp. 9
Author(s):  
Bruno Sauvalle ◽  
Arnaud de La Fortelle

The goal of background reconstruction is to recover the background image of a scene from a sequence of frames showing this scene cluttered by various moving objects. This task is fundamental in image analysis and is generally the first step before more advanced processing, but it is difficult because there is no formal definition of what should be considered background or foreground, and the results may be severely impacted by challenges such as illumination changes, intermittent object motion, and highly cluttered scenes. We propose in this paper a new iterative algorithm for background reconstruction, in which the current estimate of the background is used to guess which image pixels are background pixels, and a new background estimate is then computed from those pixels only. We show that the proposed algorithm, which uses stochastic gradient descent for improved regularization, is more accurate than the state of the art on the challenging SBMnet dataset, especially for short videos with low frame rates, and is also fast, reaching an average of 52 fps on this dataset when parameterized for maximal accuracy, using graphics processing unit (GPU) acceleration and a Python implementation.
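A minimal sketch of the iterative idea described above, assuming grayscale frames and illustrative median/threshold choices; the authors' actual model uses stochastic gradient descent and is not reproduced here.

```python
import numpy as np

def reconstruct_background(frames, n_iter=5, thresh=20.0):
    """Alternate between classifying pixels against the current background
    estimate and re-estimating the background from the pixels labelled as
    background. frames: array of shape (T, H, W)."""
    frames = frames.astype(np.float32)
    bg = np.median(frames, axis=0)                 # initial estimate
    for _ in range(n_iter):
        is_bg = np.abs(frames - bg) < thresh       # per-frame background mask
        counts = is_bg.sum(axis=0)
        sums = np.where(is_bg, frames, 0.0).sum(axis=0)
        # keep the old estimate wherever no frame was labelled background
        bg = np.where(counts > 0, sums / np.maximum(counts, 1), bg)
    return bg
```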


2022 ◽  
Author(s):  
Afzal Rahman ◽  
Haider Ali ◽  
Noor Badshah ◽  
Muhammad Zakarya ◽  
Hameed Hussain ◽  
...  

Abstract In image segmentation, and in image processing generally, noise and outliers distort the information an image contains, posing a great challenge for accurate segmentation. To ensure correct segmentation in the presence of noise and outliers, the outliers must either be identified and isolated during a denoising pre-processing step or handled by suitable constraints imposed within the segmentation framework. In this paper, we impose theoretically well-founded outlier-removal constraints within a variational framework for accurate image segmentation. We investigate a novel approach based on the power mean function, supported by a well-established theoretical base. The power mean function can distinguish between true image pixels and outliers and is therefore robust against outliers. To deploy the novel image data term and to guarantee unique segmentation results, a fuzzy membership function is employed in the proposed energy functional. Extensive qualitative and quantitative analysis on various standard data sets shows that, in contrast with the latest state-of-the-art models, the proposed model works well on images containing multiple objects with high noise and on images with intensity inhomogeneity.
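A hedged illustration of the power mean idea: with a negative exponent, the power mean of the two regions' residuals is dominated by the better-fitting region, so an outlying pixel contributes little to either region's cost. The data term below is an illustrative reading of the abstract, not the authors' exact energy functional.

```python
import numpy as np

def power_mean(a, b, p):
    """Generalized (power) mean of two positive values; for negative p it
    approaches min(a, b), which is what makes it robust to outliers."""
    return ((a ** p + b ** p) / 2.0) ** (1.0 / p)

def data_term(image, c1, c2, p=-4.0):
    """Illustrative two-phase data term combining each pixel's residuals
    to the region means c1, c2 with a negative exponent (assumption)."""
    r1 = (image - c1) ** 2 + 1e-8   # small offset keeps r ** p finite
    r2 = (image - c2) ** 2 + 1e-8
    return power_mean(r1, r2, p)

img = np.random.rand(64, 64)
d = data_term(img, 0.2, 0.8)        # low wherever the pixel fits either region
```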


Author(s):  
Aliaa Sadoon Abd ◽  
Ehab Abdul Razzaq Hussein

Cryptography and steganography are among the most important techniques used to keep confidential data from potential spies and hackers; they can be used separately or together. Encryption converts valuable information into a form that unauthorized persons cannot understand without the key, while steganography embeds confidential data inside a cover in a way that cannot be recognized or seen by the human eye. This paper presents a high-resolution chaotic approach applied to images that hide information, designing a more secure and reliable system for embedding confidential data transmitted through communication channels by combining encryption and steganography. The proposed method achieves a very high level of hiding based on non-uniform (chaotic) systems by generating a random index vector (RIV) that selects the least significant bit (LSB) positions of the image pixels used to carry the hidden data, which avoids degrading image quality. The simulation results show a peak signal-to-noise ratio (PSNR) of up to 74.87 dB and a mean square error (MSE) as low as 0.0828, which sufficiently indicates the effectiveness of the proposed algorithm.
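A minimal sketch of RIV-based LSB embedding, with a seeded NumPy PRNG standing in for the paper's chaotic index generator (an assumption), plus the PSNR metric the abstract reports:

```python
import numpy as np

def random_index_vector(n_pixels, n_bits, seed):
    """Non-repeating random index vector (RIV): the pixel positions that
    will carry the hidden bits. A seeded PRNG stands in for the paper's
    chaotic generator."""
    rng = np.random.default_rng(seed)
    return rng.choice(n_pixels, size=n_bits, replace=False)

def embed(cover, bits, seed=42):
    flat = cover.ravel().copy()
    idx = random_index_vector(flat.size, len(bits), seed)
    flat[idx] = (flat[idx] & 0xFE) | bits
    return flat.reshape(cover.shape)

def extract(stego, n_bits, seed=42):
    idx = random_index_vector(stego.size, n_bits, seed)
    return stego.ravel()[idx] & 1

def psnr(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

cover = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
bits = np.random.randint(0, 2, 1000, dtype=np.uint8)
stego = embed(cover, bits)
assert np.array_equal(extract(stego, 1000), bits)
print(f"PSNR: {psnr(cover, stego):.2f} dB")
```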


2021 ◽  
Vol 38 (6) ◽  
pp. 1837-1842
Author(s):  
Makineni Siddardha Kumar ◽  
Kasukurthi Venkata Rao ◽  
Gona Anil Kumar

Lung tumors are a dangerous disease with severe impact, causing many deaths around the world. Medical diagnosis of lung tumor growth can substantially reduce the death rate, because effective treatment options depend strongly on the particular stage of the disease. Medical imaging applies technology to analyze the internal structure of the organs of the human body; it improves the patient's quality of life through more accurate and rapid detection with limited side effects, leading to an effective overall treatment strategy. The main goal of the proposed work is to design a Lung Tumor Detection model using Convolutional Neural Networks (LTD-CNN) with machine learning techniques that cover both the micro- and macro-scale image textures encountered in Magnetic Resonance Imaging (MRI) and advanced microscopy modalities, respectively. Image pixels can provide critical information on tissue abnormality, and classification is performed for accurate tumor detection. Advances in Computer-Aided Diagnosis (CAD) help doctors and radiologists analyze lung disease precisely from CT images in its early stages. Various methods are available for lung disease recognition, but many approaches offer limited accuracy and many false positives. The proposed method is compared with traditional models, and the results show that the proposed model detects tumors effectively and more accurately.
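The abstract does not specify the LTD-CNN architecture, so the following PyTorch sketch is only a placeholder-scale CNN classifier for single-channel scan slices; all layer sizes and the two-class output are assumptions.

```python
import torch
import torch.nn as nn

class LTDCNNSketch(nn.Module):
    """Illustrative CNN classifier (tumor vs. no tumor); not the paper's
    actual LTD-CNN, whose layers are not given in the abstract."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(8),        # fixed spatial size for the head
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# A batch of four 128x128 single-channel slices -> logits of shape (4, 2).
logits = LTDCNNSketch()(torch.randn(4, 1, 128, 128))
```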


2021 ◽  
Vol 9 ◽  
Author(s):  
Pasky Pascual ◽  
Cam Pascual

Hotspots of endemic biodiversity, tropical cloud forests teem with ecosystem services such as drinking water, food, building materials, and carbon sequestration. Unfortunately, already threatened by climate change, the cloud forests in our study area are being further endangered during the Covid pandemic. These forests in northern Ecuador are being razed by city dwellers building country homes to escape the Covid virus, as well as by illegal miners desperate for money. Between August 2019 and July 2021, our study area of 52 square kilometers lost 1.17% of its tree cover. We base this estimate on simulations from the predictive model we built using Artificial Intelligence, satellite images, and cloud technology. When simulating tree cover, this model achieved an accuracy between 96 and 100 percent. To train the model, we developed a visual and interactive application to rapidly annotate satellite image pixels with land use and land cover classes. We codified our algorithms in an R package—loRax—that researchers, environmental organizations, and governmental agencies can readily deploy to monitor forest loss all over the world.


2021 ◽  
pp. 44-54
Author(s):  
O. V Vorobiev ◽  
E. V Semenova ◽  
D. A Mukhin ◽  
E. O Statsenko ◽  
T. V Baltina ◽  
...  

The article presents one possible approach to modeling objects with anisotropic properties based on images of the study area. Data from such images are taken into account when building a numerical model; material inhomogeneity can be included by integrating the local stiffness matrix of each finite element with a certain weight function. The purpose of the presented work is to develop a finite element for forming a computational ensemble and simulating mechanical behavior taking into account two-dimensional medical image data. To implement the proposed approach, we assume a correlation between the pixel values of the image and the elastic properties of the material. Meshing was based on a four-node plane finite element. This approach allows using quantitative phase or scanning electron images as well as computed tomography data. A number of test problems for compression of samples with elementary geometry were calculated. The distal part of the rat femur was considered as a model problem: a computed tomography scan of the sample was used to construct a numerical model taking into account the inhomogeneous material distribution inside the organ. The resulting field of nodal displacements, based on data obtained from the images of the study area, is presented. Within this model problem, we also considered how the resolution of the computed tomography scanner influences the quality of the results; for this purpose, calculations were carried out on compressed input medical images.
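A small sketch of the assumed pixel-to-stiffness correlation: grey values are mapped linearly to a per-element Young's modulus, and a reference element stiffness matrix is scaled accordingly. This constant-weight-per-element scaling is a simplification of the weighted integration the article describes, and the bounds e_min/e_max are illustrative stand-ins for a problem-specific calibration.

```python
import numpy as np

def element_modulus(grey_image, e_min=1.0e6, e_max=2.0e10):
    """Linearly map 8-bit grey values to a per-element Young's modulus
    (the linear law and bounds are assumptions)."""
    g = grey_image.astype(float) / 255.0
    return e_min + g * (e_max - e_min)

def scaled_stiffness(k_ref, e_elem, e_ref=1.0):
    """Scale a reference 4-node plane-element stiffness matrix by the
    element's modulus: one constant weight per element."""
    return (e_elem / e_ref) * k_ref
```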


Author(s):  
Ayman Elgharabawy ◽  
Mukesh Prasad ◽  
Chin-Teng Lin

Accuracy and computational cost are the main challenges for deep neural networks in image recognition. This paper proposes an efficient reduction of ranking to binary classification using a new feed-forward network and feature selection based on ranking the image pixels. Preference net (PN) is a novel deep ranking learning approach based on the Preference Neural Network (PNN); it uses a new ranking objective function and a positive smooth staircase (PSS) activation function to accelerate the ranking of the image pixels. PN has a new type of weighted kernel based on the Spearman rank correlation, instead of convolution, to build the feature matrix. PN employs multiple kernels of different sizes to partially rank the image pixels in order to find the best feature sequence. PN consists of multiple PNNs that share an output layer, with a separate PNN for each ranker kernel; the output results are converted to classification accuracy using a score function. Using a weighted-average ensemble of the PN models for each kernel, PN achieves promising results compared with the latest deep learning (DL) networks on the CIFAR-10 and Fashion-MNIST datasets, in terms of both accuracy and lower computational cost.
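A hedged reading of the "ranking kernel" idea: slide a window over the image and record the Spearman rank correlation between each patch and a template patch, in place of a convolution response. This is an illustrative interpretation of the abstract, not the authors' exact kernel.

```python
import numpy as np
from scipy.stats import spearmanr

def spearman_feature_map(image, template, k=3):
    """Replace the convolution response with the Spearman rank correlation
    between each k x k patch and a k x k template patch."""
    h, w = image.shape
    out = np.zeros((h - k + 1, w - k + 1))
    t = template.ravel()
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + k, j:j + k].ravel()
            rho, _ = spearmanr(patch, t)
            out[i, j] = 0.0 if np.isnan(rho) else rho  # constant patch -> 0
    return out

img = np.random.rand(32, 32)
fmap = spearman_feature_map(img, template=np.random.rand(3, 3))
```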

