Cooperative Cloud-Edge Feature Extraction Architecture for Mobile Image Retrieval

Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Chao He ◽  
Gang Ma

Mobile image retrieval greatly facilitates our daily lives and work by providing a variety of retrieval services. Existing mobile image retrieval schemes are based on a mobile cloud-edge computing architecture: user equipment captures images and uploads the captured image data to an edge server, which preprocesses the data, extracts features, and uploads the extracted features to a cloud server. However, feature extraction on the cloud server is not coordinated with feature extraction on the edge server, so features cannot be extracted effectively and image retrieval accuracy is lower. To address this, we propose a cooperative cloud-edge feature extraction architecture for mobile image retrieval. The cloud server generates a projection matrix from the image data set with a feature extraction algorithm, and the edge server extracts features from the uploaded image with that projection matrix; in other words, the cloud server guides the feature extraction performed on the edge server. This architecture can effectively extract features from the image data on the edge server, reduce network load, and save bandwidth. The experimental results indicate that the scheme achieves high retrieval accuracy while uploading few features and reduces feature matching time by about 69.5% at similar retrieval accuracy.
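A minimal sketch of the cooperative workflow described above, assuming, purely for illustration since the abstract does not name the feature extraction algorithm, that the projection matrix is learned with PCA on the cloud and applied on the edge:

```python
# Sketch of the cloud-guided edge feature extraction. PCA is an assumption;
# the paper's actual feature extraction algorithm is not specified here.
import numpy as np

def cloud_learn_projection(gallery: np.ndarray, k: int) -> np.ndarray:
    """Cloud side: learn a k-dimensional projection matrix from the image data set.
    gallery: (n_images, d) matrix of raw image descriptors."""
    centered = gallery - gallery.mean(axis=0)
    # Top-k principal directions serve as the projection matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k].T                          # shape (d, k)

def edge_extract_features(image_vec: np.ndarray, projection: np.ndarray) -> np.ndarray:
    """Edge side: project the captured image with the matrix sent by the cloud,
    so only a k-dimensional feature (not the raw image) is uploaded."""
    return image_vec @ projection            # shape (k,)

# Usage: the cloud ships P to the edge once; the edge uploads compact features.
rng = np.random.default_rng(0)
gallery = rng.random((1000, 4096))           # hypothetical raw descriptors
P = cloud_learn_projection(gallery, k=64)
feature = edge_extract_features(rng.random(4096), P)
```

The point of this division of labor is that the heavy, data-set-wide computation stays on the cloud, while the edge performs only a matrix multiplication per captured image before uploading.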

Author(s):  
Noureddine Abbadeni

This chapter describes a human-perception-based approach to content-based image representation and retrieval. We consider textured images and propose to model the textural content of images by a set of features having a perceptual meaning, and we apply them to content-based image retrieval. We present a new method to estimate a set of perceptual textural features, namely coarseness, directionality, contrast, and busyness. The proposed computational measures are based on two representations: the original image representation and the autocovariance function (associated with images) representation. The correspondence of the proposed computational measures to human judgments is shown using a psychometric method based on the Spearman rank-correlation coefficient. The set of computational measures is applied to content-based image retrieval on a large image data set, the well-known Brodatz database. Experimental results show a strong correlation between the proposed computational textural measures and human perceptual judgments. The benchmarking of retrieval performance, done using the recall measure, shows interesting results. Furthermore, merging/fusing the results returned by each of the two representations is shown to allow a significant improvement in retrieval effectiveness.
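As a hedged illustration of the psychometric evaluation step, the sketch below correlates a computational texture measure with human rankings via the Spearman rank-correlation coefficient; the numeric values are placeholders, not data from the chapter.

```python
# Correlating a computed perceptual measure (e.g., coarseness) with human
# rankings. The values below are illustrative placeholders only.
from scipy.stats import spearmanr

computed_coarseness = [0.31, 0.55, 0.12, 0.78, 0.44]   # one value per texture
human_judgments     = [2,    4,    1,    5,    3]      # corresponding human ranks

rho, p_value = spearmanr(computed_coarseness, human_judgments)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```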


2019 ◽  
Vol 8 (3) ◽  
pp. 8881-8884

These are days when we are rich in data but poor in information, and this is especially true of image data. Whether for ordinary images or satellite images, the image collections are huge, yet making use of those images receives the least attention. Extracting features from large images is a challenging and compute-intensive task, but doing so is very fruitful. CBIR (Content-Based Image Retrieval), when applied to HRRS (High Resolution Remote Sensing) images, yields effective results.


Kursor ◽  
2018 ◽  
Vol 9 (2) ◽  
Author(s):  
Hendro Nugroho ◽  
Eka Prakarsa Mandyartha

The Ganesha statues found in the Trowulan area of Mojokerto are no longer intact, because they are recovered from the soil surface or from underground, which makes it very difficult for archaeologists to categorize the findings. To overcome this problem, this research proposes an image retrieval system that can provide information about such historic objects, with the Ganesha statue used as the test case. Feature extraction values are computed from the Ganesha statue images using the moment invariant method, and retrieval results are then obtained using the Manhattan distance. In the image retrieval system, an image of the Ganesha statue is preprocessed to a 200x260-pixel BMP, edges are detected with the Roberts method, and moment invariant features are extracted and stored in a database as training data. Test data go through the same process as the training data, and the closest match is found using the Manhattan distance. In tests on 15 Ganesha statue images, 62% were retrieved correctly and 38% incorrectly. The research can be further developed using other methods to improve image retrieval accuracy.
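A rough sketch of the retrieval pipeline described above, combining Roberts edge detection, Hu moment invariants, and the Manhattan distance; the library choices (scikit-image, NumPy) and the ranking helper are illustrative assumptions, not the paper's implementation.

```python
# Edge detection -> moment-invariant signature -> Manhattan-distance ranking.
import numpy as np
from skimage.filters import roberts
from skimage.measure import moments_central, moments_normalized, moments_hu

def extract_signature(gray_image: np.ndarray) -> np.ndarray:
    """Roberts edge detection followed by the seven Hu moment invariants."""
    edges = roberts(gray_image)
    nu = moments_normalized(moments_central(edges))
    return moments_hu(nu)                                  # 7-element feature vector

def manhattan_distance(a: np.ndarray, b: np.ndarray) -> float:
    """City-block distance used to rank stored images against the query."""
    return float(np.sum(np.abs(a - b)))

def retrieve(query_sig: np.ndarray, training: dict[str, np.ndarray]) -> str:
    """Return the name of the training image whose signature is closest to the query."""
    return min(training, key=lambda name: manhattan_distance(query_sig, training[name]))
```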


Author(s):  
Vinayak Majhi ◽  
Sudip Paul

Content-based image retrieval (CBIR) is a promising technique for accessing visual data. With the enormous growth of computer storage, networking, and transmission technology, it has become possible to retrieve image data in addition to text. In the traditional approach, the content of an image is found through images tagged with indexed text. With the development of machine learning techniques in the domain of artificial intelligence, feature extraction for CBIR has become easier. Medical images are increasing continuously, and each image holds specific and unique information about a particular disease. The objective of using CBIR in medical diagnosis is to provide correct and effective information to the specialist for a high-quality and efficient diagnosis of the disease. Medical image content requires different CBIR techniques for different acquisition modalities such as MRI, CT, PET, ultrasound (USG), and MRS; accordingly, each CBIR technique has its own feature extraction algorithm for each acquisition technique.


Author(s):  
Tatyana Biloborodova ◽  
Inna Skarga-Bandurova ◽  
Mark Koverga

A methodology for eliminating class imbalance in image data sets is presented. The proposed methodology includes the stages of image fragment extraction, fragment augmentation, feature extraction, and duplication of minority objects, and it is based on reinforcement learning technology. The degree-of-imbalance indicator was used as a measure of the imbalance of the data set. An experiment was performed using a set of images of the faces of patients with skin rashes, annotated according to the severity of acne. The main steps of the methodology's implementation are considered. The classification results showed the feasibility of applying the proposed methodology: accuracy on test data was 85%, which is 5% higher than the result obtained without the proposed methodology. Key words: class imbalance, unbalanced data set, image fragment extraction, augmentation.
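A hedged sketch of two of the stages named above, measuring the degree of imbalance and duplicating minority objects; the majority-to-minority ratio and random duplication used here are common illustrative choices, not the paper's exact procedures.

```python
# Degree-of-imbalance measure and minority-class duplication (oversampling).
# The specific indicator and augmentation in the paper may differ.
from collections import Counter
import random

def degree_of_imbalance(labels):
    """Ratio of the largest class size to the smallest (assumed indicator)."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

def duplicate_minority(samples, labels):
    """Randomly duplicate minority-class samples until all classes are balanced."""
    counts = Counter(labels)
    target = max(counts.values())
    balanced_samples, balanced_labels = list(samples), list(labels)
    for cls, n in counts.items():
        pool = [s for s, y in zip(samples, labels) if y == cls]
        for _ in range(target - n):
            balanced_samples.append(random.choice(pool))
            balanced_labels.append(cls)
    return balanced_samples, balanced_labels
```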


2013 ◽  
Vol 333-335 ◽  
pp. 822-827 ◽  
Author(s):  
Jun Chul Chun ◽  
Wong Gi Kim

It is known that the wavelet transform provides very useful feature values for analyzing various types of images. This paper presents a novel approach to content-based textile image retrieval that uses composite feature vectors combining low-level color features from the spatial domain with second-order statistical features from wavelet-transformed sub-band coefficients. Although the color histogram by itself is an efficient and widely used signature for CBIR, it cannot carry local spatial information about pixels and produces inaccurate retrieval results, especially on large image data sets. In this paper, we extract texture features such as contrast, homogeneity, ASM (angular second moment), and entropy from the sub-band images decomposed by the wavelet transform and combine these multiple feature vectors with the color histogram to retrieve textile images. The experimental results show that the proposed approach efficiently retrieves the desired images from a large textile image database.
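The composite descriptor could be assembled roughly as follows; PyWavelets and scikit-image are assumed here for illustration, and the single-level Haar decomposition and 16-bin histogram are placeholder choices rather than the paper's settings.

```python
# Second-order statistics (contrast, homogeneity, ASM, entropy) from wavelet
# sub-bands, concatenated with a color histogram. Library and parameter
# choices are assumptions for illustration.
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

def wavelet_glcm_features(gray: np.ndarray) -> np.ndarray:
    feats = []
    _, (ch, cv, cd) = pywt.dwt2(gray, 'haar')          # one decomposition level
    for band in (ch, cv, cd):
        q = (255 * (band - band.min()) / (np.ptp(band) + 1e-9)).astype(np.uint8)
        glcm = graycomatrix(q, distances=[1], angles=[0], levels=256, normed=True)
        p = glcm[:, :, 0, 0]
        entropy = -np.sum(p * np.log2(p + 1e-12))
        feats += [graycoprops(glcm, 'contrast')[0, 0],
                  graycoprops(glcm, 'homogeneity')[0, 0],
                  graycoprops(glcm, 'ASM')[0, 0],
                  entropy]
    return np.asarray(feats)

def color_histogram(rgb: np.ndarray, bins: int = 16) -> np.ndarray:
    """Global color histogram; concatenated with the texture features above."""
    hist, _ = np.histogramdd(rgb.reshape(-1, 3), bins=(bins, bins, bins))
    return hist.ravel() / hist.sum()
```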


2021 ◽  
Vol 12 (1) ◽  
pp. 77-94
Author(s):  
Yanxia Jin ◽  
Xin Zhang ◽  
Yao Jia

In image retrieval, a major challenge is that the number of images in the gallery is large and irregular, which results in low retrieval accuracy. This paper analyzes the disadvantages of the PAM (partitioning around medoids) clustering algorithm for image data classification, in particular the excessive time consumed in searching for cluster representative objects. A fireworks particle swarm algorithm is used in the optimization process, and PF-PAM, an improved PAM algorithm, is proposed and applied to image retrieval. First, the feature vectors of the images in the gallery are extracted for an initial clustering. Next, based on these clustering results, the optimal cluster centers are searched with the fireworks particle swarm algorithm to obtain the final clustering. Finally, given an incoming query image, the related image category is determined and similar images are returned. Experimental comparison with other approaches shows that this method can effectively improve retrieval accuracy.
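For context, the sketch below shows the two baseline PAM steps whose medoid search PF-PAM accelerates; the fireworks particle swarm optimization itself is not reproduced here, and the exhaustive scan in the update step is exactly the cost the paper targets.

```python
# Baseline PAM: assign points to nearest medoid, then exhaustively re-pick
# each cluster's medoid. PF-PAM replaces the costly medoid search with a
# fireworks particle swarm search (not shown).
import numpy as np

def pam_assign(features: np.ndarray, medoid_idx: list[int]) -> np.ndarray:
    """Assign every feature vector to its nearest medoid."""
    d = np.linalg.norm(features[:, None, :] - features[medoid_idx][None, :, :], axis=2)
    return d.argmin(axis=1)

def pam_update(features: np.ndarray, labels: np.ndarray, k: int) -> list[int]:
    """For each cluster, pick the member minimizing total intra-cluster distance."""
    medoids = []
    for c in range(k):
        idx = np.flatnonzero(labels == c)
        d = np.linalg.norm(features[idx][:, None] - features[idx][None, :], axis=2)
        medoids.append(int(idx[d.sum(axis=1).argmin()]))
    return medoids
```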


2019 ◽  
Vol 2019 (1) ◽  
pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Different whiteboard image degradations greatly reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, various researchers have addressed the problem through different image enhancement techniques. Most state-of-the-art approaches apply common image processing techniques such as background-foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To overcome these problems, the authors have proposed a deep-learning-based solution. They have contributed a new whiteboard image data set and adopted two deep convolutional neural network architectures for whiteboard image quality enhancement applications. Their evaluations of the trained models demonstrated superior performance over the conventional methods.
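The two architectures adopted by the authors are not specified in the abstract; purely as an illustration of the learned image-to-image enhancement setup, a minimal convolutional sketch in PyTorch might look as follows.

```python
# Minimal image-to-image enhancement network: degraded whiteboard photo in,
# cleaned RGB image out. This is an illustrative stand-in, not either of the
# architectures adopted in the paper.
import torch
import torch.nn as nn

class WhiteboardEnhancer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),   # enhanced RGB output
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Training would minimize a reconstruction loss against clean targets, e.g.:
# loss = nn.functional.l1_loss(WhiteboardEnhancer()(degraded), clean)
```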


2020 ◽  
Vol 33 (6) ◽  
pp. 838-844
Author(s):  
Jan-Helge Klingler ◽  
Ulrich Hubbe ◽  
Christoph Scholz ◽  
Florian Volz ◽  
Marc Hohenhaus ◽  
...  

OBJECTIVE Intraoperative 3D imaging and navigation is increasingly used for minimally invasive spine surgery. A novel, noninvasive patient tracker that is adhered as a mask on the skin for 3D navigation necessitates a larger intraoperative 3D image set for appropriate referencing. This enlarged 3D image data set can be acquired by a state-of-the-art 3D C-arm device that is equipped with a large flat-panel detector. However, the presumably associated higher radiation exposure to the patient has essentially not yet been investigated and is therefore the objective of this study.

METHODS Patients were retrospectively included if a thoracolumbar 3D scan was performed intraoperatively between 2016 and 2019 using a 3D C-arm with a large 30 × 30–cm flat-panel detector (3D scan volume 4096 cm3) or a 3D C-arm with a smaller 20 × 20–cm flat-panel detector (3D scan volume 2097 cm3), and the dose area product was available for the 3D scan. Additionally, the fluoroscopy time and the number of fluoroscopic images per 3D scan, as well as the BMI of the patients, were recorded.

RESULTS The authors compared 62 intraoperative thoracolumbar 3D scans using the 3D C-arm with a large flat-panel detector and 12 3D scans using the 3D C-arm with a small flat-panel detector. Overall, the 3D C-arm with a large flat-panel detector required more fluoroscopic images per scan (mean 389.0 ± 8.4 vs 117.0 ± 4.6, p < 0.0001), leading to a significantly higher dose area product (mean 1028.6 ± 767.9 vs 457.1 ± 118.9 cGy × cm2, p = 0.0044).

CONCLUSIONS The novel, noninvasive patient tracker mask facilitates intraoperative 3D navigation while eliminating the need for an additional skin incision with detachment of the autochthonous muscles. However, the use of this patient tracker mask requires a larger intraoperative 3D image data set for accurate registration, resulting in a 2.25 times higher radiation exposure to the patient. The use of the patient tracker mask should thus be based on an individual decision, especially taking into consideration the radiation exposure and the extent of instrumentation.
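The 2.25-fold figure follows directly from the mean dose area products reported above:

\[ \frac{1028.6\ \mathrm{cGy \cdot cm^2}}{457.1\ \mathrm{cGy \cdot cm^2}} \approx 2.25 \]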


2020 ◽  
Vol 27 (4) ◽  
pp. 313-320 ◽  
Author(s):  
Xuan Xiao ◽  
Wei-Jie Chen ◽  
Wang-Ren Qiu

Background: Information about the quaternary structural attributes of proteins is very important because it is closely related to their biological functions. With the rapid development of next-generation sequencing technology, we face a challenge: how to automatically identify the quaternary structural attributes of new polypeptide chains from their sequence information (i.e., whether they form a monomer, a hetero-oligomer, or a homo-oligomer). Objective: In this article, our goal is to find a new way to represent protein sequences and thereby improve the prediction rate of protein quaternary structure. Methods: We developed a prediction system for protein quaternary structural type in which a protein sequence is represented by combining Pfam functional-domain and gene ontology information. Protein features are turned into digital sequences, and the prediction of quaternary structure is completed through specific machine learning algorithms and a verification algorithm. Results: Our data set contains 5495 protein samples. With the method provided in this paper, we classify proteins as a monomer, a hetero-oligomer, or a homo-oligomer, and the prediction rate is 74.38%, which is 3.24% higher than that of previous studies. With this new feature extraction method, we can further classify the quaternary structure of proteins, and the results are correspondingly improved. Conclusion: After applying the new prediction system, we have successfully improved the prediction rate compared with previous results. We have reason to believe that the feature extraction method in this paper is more practical and can serve as a reference for other protein classification problems.
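A hedged sketch of the feature representation described above: each protein is encoded as a binary vector over its Pfam domains and GO annotations, and a generic classifier predicts the quaternary structural type. The classifier choice (random forest) and the example identifiers are assumptions, not the paper's.

```python
# Binary Pfam/GO feature vectors fed to a generic classifier. The annotations
# and the random forest are illustrative placeholders.
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical annotations: one set of Pfam/GO identifiers per protein.
annotations = [{"PF00042", "GO:0005344"}, {"PF00104"}, {"PF00042", "GO:0020037"}]
labels = ["homo-oligomer", "monomer", "hetero-oligomer"]

encoder = MultiLabelBinarizer()
X = encoder.fit_transform(annotations)         # binary Pfam/GO feature matrix
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
print(clf.predict(encoder.transform([{"PF00042"}])))
```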

