Data Security Protection Mechanism of Video and Image Feature Modeling Based on Domestic Crypto Algorithm

2021 ◽  
Vol 2137 (1) ◽  
pp. 012066
Author(s):  
Yueqiang Tu

Abstract Video and image monitoring increasingly appears in our homes, our travel, and other aspects of daily life. Analysis and comparison of video and image monitoring data can provide strong analytical support for social security prevention and control, traffic command, work safety, and other applications. Video and image feature modeling is a necessary prerequisite for such analysis and comparison: it produces video and image feature data that reflects the most essential information about elements such as people, vehicles, and objects. However, video and image data is easily damaged, altered, and leaked during collection, aggregation, analysis, modeling, and storage, and therefore faces data security risks. This paper proposes a data security protection mechanism for video and image feature modeling based on domestic cryptographic algorithms, realizing whole-process encryption protection of data acquisition, transmission, and storage.
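The abstract gives no code, but the storage-encryption step of such a mechanism can be illustrated with SM4, one of the Chinese domestic (GM/T series) block ciphers the title alludes to. The sketch below is a minimal, hypothetical example using the SM4 support in the Python `cryptography` package; key management and the transmission layer of the actual mechanism are not shown.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def sm4_ctr_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt a feature record with SM4 in CTR mode; the random nonce is prepended."""
    nonce = os.urandom(16)                              # per-message counter block
    enc = Cipher(algorithms.SM4(key), modes.CTR(nonce)).encryptor()
    return nonce + enc.update(plaintext) + enc.finalize()

def sm4_ctr_decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    dec = Cipher(algorithms.SM4(key), modes.CTR(nonce)).decryptor()
    return dec.update(ct) + dec.finalize()

key = os.urandom(16)                                    # SM4 uses a 128-bit key
record = b"feature vector bytes for one detected face"  # invented sample payload
assert sm4_ctr_decrypt(key, sm4_ctr_encrypt(key, record)) == record
```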

2020 ◽  
Vol 39 (4) ◽  
pp. 5953-5964
Author(s):  
Lei Ning

The huge volume of digital image data in e-commerce transactions poses serious problems for rapid image retrieval and storage. Image hashing technology can convert image data of arbitrary resolution into a binary code sequence of tens or hundreds of bits through a hash function. Building on image content characteristics, this study improves the traditional hash function and proposes a hashing method based on bilateral random projection: the projection vectors are obtained during the low-rank sparse decomposition of the image data matrix and are then orthogonalized in groups. The study also designed comparative experiments to evaluate the effectiveness of the algorithm. The results show that the proposed algorithm performs well, is applicable in practice, and can provide a theoretical reference for subsequent related research.
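As an illustration only (the paper's exact decomposition and grouping scheme is not reproduced here), the following sketch shows the general shape of a projection-based image hash: project a feature vector onto a set of orthogonalized random directions and binarize the result. The QR-based orthogonalization is a stand-in for the group orthogonalization the abstract describes.

```python
import numpy as np

def make_projections(dim: int, n_bits: int, seed: int = 0) -> np.ndarray:
    """Random Gaussian directions, orthogonalized via QR (a stand-in for
    the paper's group-orthogonalized bilateral projection vectors)."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((dim, n_bits)))
    return q                                   # dim x n_bits, orthonormal columns

def hash_features(features: np.ndarray, proj: np.ndarray) -> np.ndarray:
    """Binary hash: threshold each projection score at the median score."""
    scores = features @ proj
    return (scores > np.median(scores)).astype(np.uint8)

# Toy usage: a 64-dimensional feature vector hashed to a 32-bit code.
feats = np.random.default_rng(1).standard_normal(64)
code = hash_features(feats, make_projections(64, 32))
print("".join(map(str, code)))
```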


Author(s):  
Robert W. Mackin

This paper presents two advances towards the automated three-dimensional (3-D) analysis of thick and heavily-overlapped regions in cytological preparations such as cervical/vaginal smears. First, a high-speed 3-D brightfield microscope has been developed, allowing the acquisition of image data at speeds approaching 30 optical slices per second. Second, algorithms have been developed to detect and segment nuclei in spite of the extremely high image variability and low contrast typical of such regions. The analysis of such regions is inherently a 3-D problem that cannot be solved reliably with conventional 2-D imaging and image analysis methods.

High-speed 3-D imaging of the specimen is accomplished by moving the specimen axially relative to the objective lens of a standard microscope (Zeiss) at a speed of 30 steps per second, where the step size is adjustable from 0.2-5 μm. The specimen is mounted on a computer-controlled, piezoelectric microstage (Burleigh PZS-100, 68 μm displacement). At each step, an optical slice is acquired using a CCD camera (SONY XC-11/71 IP, Dalsa CA-D1-0256, and CA-D2-0512 have been used) connected to a 4-node array processor system based on the Intel i860 chip.
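The acquisition scheme lends itself to a simple control loop: grab a frame, step the piezo stage, repeat. The sketch below is hypothetical — the `stage` and `camera` objects and their methods are invented stand-ins for the microstage and frame-grabber drivers — but it shows how a z-stack of optical slices is assembled at a fixed step size.

```python
import numpy as np

def acquire_z_stack(stage, camera, n_slices: int, step_um: float = 0.5) -> np.ndarray:
    """Collect `n_slices` optical sections by stepping the specimen axially.
    `camera.grab` and `stage.move_relative_um` are hypothetical wrappers
    around the CCD frame grabber and the piezoelectric microstage."""
    slices = []
    for _ in range(n_slices):
        slices.append(camera.grab())        # one optical section (2-D array)
        stage.move_relative_um(step_um)     # advance focus by one step
    return np.stack(slices)                 # (n_slices, height, width) volume
```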


2011 ◽  
Vol 22 (No. 4) ◽  
pp. 133-142 ◽  
Author(s):  
I. Švec ◽  
M. Hrušková

Abstract: The baking quality of flour from six wheat cultivars (harvests 2002 and 2003), belonging to quality classes A and B, was evaluated using the fermented dough test. Analytical traits of kernel and flour showed differences between the classes, which were confirmed by a baking test with the full bread formula according to the Czech method. In addition to standard methods of describing bread parameters (specific bread volume and bread shape measurements), penetrometer measurements and image analysis were used in an effort to differentiate the wheat samples into the quality classes. The baking test proved significant differences in specific bread volume: the highest volumes were obtained with the cultivar Vinjet in class A and with SG-S1098 in class B, approx. 410 and 420 ml/100 g, respectively. Although significant correlations between the image analysis data and specific bread volume were proved, no image analysis parameter distinguished the quality classes; only the penetrometric measurements made on bread crumb were suitable for that purpose (r = 0.9083; α = 0.01). Among the image analysis data, the total cell area of the crumb had the strongest correlation with specific bread volume (r = 0.7840; α = 0.01).
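The reported r values are ordinary Pearson correlations tested for significance. A minimal illustration of that computation (the numbers below are invented, not the paper's data):

```python
import numpy as np
from scipy.stats import pearsonr

# Invented example values: crumb total cell area vs. specific bread volume.
cell_area = np.array([31.2, 28.5, 35.1, 40.3, 33.8, 38.0])            # mm^2
bread_volume = np.array([342.0, 330.5, 365.2, 410.1, 358.7, 395.4])   # ml/100 g

r, p = pearsonr(cell_area, bread_volume)
print(f"r = {r:.4f}, significant at alpha = 0.01: {p < 0.01}")
```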


1993 ◽  
Vol 20 (2) ◽  
pp. 228-235 ◽  
Author(s):  
Yean-Jye Lu ◽  
Xidong Yuan

Image analysis for traffic data collection has been studied throughout the world for more than a decade. A survey of existing systems shows that research has focused mainly on monochrome image analysis and that color image analysis has rarely been studied. With the application of color image analysis in mind, this paper proposes a new algorithm for daytime vehicle speed measurement. The new algorithm consists of four steps: (i) image input, (ii) pixel analysis, (iii) single image analysis, and (iv) image sequence analysis. It has three significant advantages. First, the algorithm can distinguish shadows cast by moving vehicles outside the detection area from actual vehicles passing through the area, a problem that is difficult for monochrome image analysis techniques to handle. Second, the algorithm significantly reduces the image data to be processed, so that only a personal computer is required, without any special hardware. Third, detection spots can be placed flexibly at any position in the camera's field of view. The accuracy of the algorithm is also discussed. Key words: speed measurement, vehicle detection, image analysis, image processing, traffic control, traffic measurement and road traffic.
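The pixel- and sequence-analysis steps are not detailed in the abstract, but the final speed estimate from detection spots reduces to simple arithmetic: a vehicle crossing two spots a known ground distance apart gives speed = distance / (frame difference × frame interval). A hypothetical helper, standing in for the image sequence analysis step:

```python
def estimate_speed_kmh(frame_a: int, frame_b: int,
                       spot_distance_m: float, fps: float = 30.0) -> float:
    """Speed from the frame indices at which a vehicle crosses two
    detection spots a known distance apart (hypothetical sketch)."""
    dt = (frame_b - frame_a) / fps        # travel time in seconds
    return (spot_distance_m / dt) * 3.6   # m/s -> km/h

# A vehicle crossing spots 10 m apart in 12 frames at 30 fps:
print(estimate_speed_kmh(100, 112, 10.0))   # 90.0 km/h
```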


PeerJ ◽  
2017 ◽  
Vol 5 ◽  
pp. e4088 ◽  
Author(s):  
Malia A. Gehan ◽  
Noah Fahlgren ◽  
Arash Abbasi ◽  
Jeffrey C. Berry ◽  
Steven T. Callen ◽  
...  

Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.
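PlantCV itself is the open-source toolkit described here. A minimal segmentation sketch in the spirit of its documented workflow follows; the function names are recalled from the v3-era API and exact signatures vary between releases, so treat the details as assumptions rather than a verbatim recipe.

```python
from plantcv import plantcv as pcv

# Read an image and segment the plant from the background; the Lab 'a'
# channel and the threshold value 120 are illustrative choices only.
img, path, filename = pcv.readimage(filename="plant.png")
gray = pcv.rgb2gray_lab(rgb_img=img, channel="a")   # green-magenta channel
mask = pcv.threshold.binary(gray_img=gray, threshold=120, object_type="dark")
pcv.print_image(mask, "plant_mask.png")
```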


2020 ◽  
Author(s):  
Rostislav Kouznetsov

Abstract. Lossy compression of scientific data arrays is a powerful tool to save network bandwidth and storage space. Properly applied, lossy compression can reduce the size of a dataset by orders of magnitude while keeping all essential information, whereas a wrong choice of lossy compression parameters leads to the loss of valuable data. The paper considers statistical properties of several lossy compression methods implemented in "NetCDF operators" (NCO), a popular tool for handling and transforming numerical data in NetCDF format. We compare the imprecisions and artifacts resulting from lossy compression of floating-point data arrays. In particular, we show that the popular Bit Grooming algorithm (the default in NCO) has sub-optimal accuracy and produces substantial artifacts in multipoint statistics. We suggest a simple implementation of two algorithms that are free from these artifacts and have twice the precision. In addition, we suggest a way to rectify data already processed with Bit Grooming. The algorithm has been contributed to the NCO mainstream. The supplementary material contains an implementation of the algorithm in Python 3.
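The artifact-free alternative in this line of work is mantissa bit rounding (round-to-nearest, ties-to-even, on the binary significand) rather than Bit Grooming's alternating bit-set/bit-clear. A minimal NumPy sketch of bit rounding for float32 arrays, written from the general description rather than copied from the paper's supplement:

```python
import numpy as np

def bit_round(a: np.ndarray, keepbits: int) -> np.ndarray:
    """Keep `keepbits` explicit mantissa bits of a float32 array, rounding
    to nearest with ties to even (Bit Grooming, by contrast, merely sets
    or clears the discarded bits)."""
    assert a.dtype == np.float32 and 0 < keepbits < 23
    bits = a.view(np.uint32).copy()
    drop = 23 - keepbits                            # mantissa bits to discard
    half = np.uint32(1 << (drop - 1))
    last_kept = (bits >> drop) & np.uint32(1)       # parity bit for ties-to-even
    bits += half - np.uint32(1) + last_kept         # round to nearest, ties to even
    bits &= np.uint32((0xFFFFFFFF << drop) & 0xFFFFFFFF)   # zero discarded bits
    return bits.view(np.float32)

x = np.array([3.14159265, 2.71828182], dtype=np.float32)
print(bit_round(x, keepbits=7))   # about two significant decimal digits retained
```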


2016 ◽  
Vol 3 (2) ◽  
pp. 189-196
Author(s):  
Budi Hartono ◽  
Veronica Lusiana

Image searching based on image content is often called image object searching. If the image data contains an object similar to the one in the query image, the search process is expected to recognize it. The object similar to the query image may appear at any position in the image data, and its location becomes the main focus of attention, the region of interest (ROI). This object may also occupy a different area than the object in the query image, either larger or smaller. This research uses two image data sizes, 512×512 and 256×256 pixels. Experimental results show that preparing a multilevel sub-image model, resized to the same size as the query image (128×128 pixels), helps to find the ROI position in the image data. To find image data similar to the query image, the Euclidean distance between the query image features and the image data features is calculated.
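A hypothetical sketch of the matching step: slide a 128×128 window over the image (the multilevel sub-image idea, shown here at a single level), compute a feature vector per window, and keep the window with the smallest Euclidean distance to the query features. The coarse intensity histogram used as `extract_features` is an invented stand-in for the paper's features.

```python
import numpy as np

def extract_features(patch: np.ndarray) -> np.ndarray:
    """Invented stand-in feature: a normalized 32-bin intensity histogram."""
    hist, _ = np.histogram(patch, bins=32, range=(0, 256))
    return hist / hist.sum()

def find_roi(image: np.ndarray, query: np.ndarray, stride: int = 32):
    """Return (row, col) of the window whose features are nearest
    (in Euclidean distance) to the query image's features."""
    q = extract_features(query)
    best, best_pos = np.inf, (0, 0)
    h, w = query.shape
    for r in range(0, image.shape[0] - h + 1, stride):
        for c in range(0, image.shape[1] - w + 1, stride):
            d = np.linalg.norm(extract_features(image[r:r+h, c:c+w]) - q)
            if d < best:
                best, best_pos = d, (r, c)
    return best_pos

rng = np.random.default_rng(0)
data = rng.integers(0, 256, (512, 512)).astype(np.uint8)
query = data[192:320, 96:224].copy()      # plant a known 128x128 ROI
print(find_roi(data, query))              # -> (192, 96)
```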


Author(s):  
Jane You ◽  
Qin Li ◽  
Jinghua Wang

This paper presents a new approach to content-based image retrieval that uses dynamic indexing and guided search in a hierarchical structure and extends data mining and data warehousing techniques. The proposed algorithms include a wavelet-based scheme for multiple image feature extraction and the extension of a conventional data warehouse and an image database to an image data warehouse for dynamic image indexing. The approach also provides an image data schema for hierarchical image representation and dynamic image indexing, a statistically based feature selection scheme to achieve flexible similarity measures, and a feature component code to facilitate query processing and guide the search for the best match. A series of case studies is reported, including a wavelet-based image color hierarchy, classification of satellite images, tropical cyclone pattern recognition, and personal identification using multi-level palmprint and face features. Experimental results confirm that the new approach is feasible for content-based image retrieval.
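The wavelet-based feature extraction step can be illustrated with PyWavelets. The sketch below is an assumption about the general technique, not the paper's exact scheme: it summarizes each subband of a 2-level decomposition by its energy, yielding a compact feature vector that could serve as an index key in a hierarchical image store.

```python
import numpy as np
import pywt

def wavelet_features(image: np.ndarray, levels: int = 2,
                     wavelet: str = "haar") -> np.ndarray:
    """Energy of each subband of a 2-D wavelet decomposition: a compact
    signature for hierarchical indexing and similarity comparison."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    feats = [np.mean(coeffs[0] ** 2)]            # approximation-band energy
    for (ch, cv, cd) in coeffs[1:]:              # detail subbands per level
        feats += [np.mean(ch ** 2), np.mean(cv ** 2), np.mean(cd ** 2)]
    return np.array(feats)

img = np.random.default_rng(0).random((128, 128))
print(wavelet_features(img))   # 1 + 3*levels values, here 7
```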

