image objects
Recently Published Documents

TOTAL DOCUMENTS: 197 (five years: 60)
H-INDEX: 13 (five years: 3)

2022 ◽  
Vol 54 (8) ◽  
pp. 1-36
Author(s):  
Xingwei Zhang ◽  
Xiaolong Zheng ◽  
Wenji Mao

Deep neural networks (DNNs) have been shown to be easily attacked by well-designed adversarial perturbations. Image objects carrying small perturbations that are imperceptible to the human eye can induce DNN-based image classifiers to make erroneous predictions with high probability. Adversarial perturbations can also fool real-world machine learning systems and transfer between different architectures and datasets. Recently, defense against adversarial perturbations has become a hot topic and attracted much attention. A large number of works have been put forward to defend against adversarial perturbations, enhance DNN robustness against potential attacks, or interpret the origin of adversarial perturbations. In this article, we provide a comprehensive survey of classical and state-of-the-art defense methods, illuminating their main concepts, in-depth algorithms, and fundamental hypotheses regarding the origin of adversarial perturbations. In addition, we discuss potential directions of this domain for future researchers.
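As a concrete illustration of such an attack, the fast gradient sign method (FGSM) perturbs an input along the sign of the loss gradient. A minimal NumPy sketch on a hypothetical linear logistic classifier (illustrative only; the survey covers far more sophisticated attacks and defenses):

```python
import numpy as np

def fgsm_perturb(x, w, y, eps):
    """One FGSM step against a linear logistic model p = sigmoid(w @ x).

    The cross-entropy loss gradient w.r.t. the input is (p - y) * w;
    FGSM moves x by eps in the direction of the gradient's sign, so the
    perturbation is bounded by eps in every pixel."""
    p = 1.0 / (1.0 + np.exp(-(w @ x)))  # model's predicted probability
    grad = (p - y) * w                  # dLoss/dx
    return x + eps * np.sign(grad)      # adversarial example

rng = np.random.default_rng(0)
w = rng.normal(size=64)   # fixed (hypothetical) model weights
x = rng.normal(size=64)   # a clean input treated as a flattened image
x_adv = fgsm_perturb(x, w, y=1.0, eps=0.05)
# The attack lowers the model's confidence in the true class y = 1
# while changing no input component by more than eps.
```

Even this linear toy shows the core phenomenon the survey addresses: a max-norm-bounded perturbation, invisible at small eps, systematically shifts the classifier's output.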


2021 ◽  
Author(s):  
Athanasios Giannikis ◽  
Efthimios Alepis ◽  
Maria Virvou

2021 ◽  
Vol 38 (5) ◽  
pp. 1353-1360
Author(s):  
Fengyun Cao

Based on multi-feature fusion, this paper introduces a novel depth estimation method to suppress defocus and motion blur, as well as focal plane ambiguity. First, the node features formed by occlusion are fused to optimize image segmentation and obtain the positional relations between image objects. Next, the Gaussian gradient ratio between the defocused input image and a second Gaussian blur of it is calculated to derive the sparse blur at edges. After that, a fast guided filter is adopted to diffuse the sparse blur globally and estimate the relative depth of the scene. Experimental results demonstrate that the method effectively resolves the ambiguity of depth estimation and suppresses noise in real time.
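The gradient-ratio step can be illustrated in one dimension: for an ideal step edge blurred with unknown sigma and re-blurred with a known sigma0, the ratio R of gradient magnitudes at the edge satisfies R = sqrt((sigma^2 + sigma0^2) / sigma^2), so sigma = sigma0 / sqrt(R^2 - 1). A NumPy sketch under these assumptions (a simplified stand-in for the paper's full pipeline, which also uses segmentation and guided filtering):

```python
import numpy as np

def gaussian_blur_1d(signal, sigma):
    """Blur a 1-D signal with a normalized Gaussian kernel,
    padding with edge values to avoid boundary roll-off."""
    radius = int(4 * sigma + 0.5)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(signal, radius, mode="edge")
    return np.convolve(padded, kernel, mode="valid")

def estimate_edge_blur(blurred, sigma0):
    """Estimate the unknown blur sigma at a step edge from the
    ratio of gradient magnitudes before and after re-blurring."""
    reblurred = gaussian_blur_1d(blurred, sigma0)
    g1 = np.max(np.abs(np.gradient(blurred)))
    g2 = np.max(np.abs(np.gradient(reblurred)))
    ratio = g1 / g2
    return sigma0 / np.sqrt(ratio**2 - 1.0)

# Synthetic step edge defocused with sigma = 2.0
edge = gaussian_blur_1d(np.r_[np.zeros(100), np.ones(100)], 2.0)
print(estimate_edge_blur(edge, sigma0=1.0))  # close to the true sigma of 2.0
```

Discretization biases the estimate slightly, which is one reason the paper diffuses these sparse edge estimates with a guided filter rather than using them directly.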


2021 ◽  
Vol 23 (1) ◽  
Author(s):  
Xiangning Chen ◽  
Daniel G. Chen ◽  
Zhongming Zhao ◽  
Justin M. Balko ◽  
Jingchun Chen

Abstract
Background: Transcriptome sequencing has become broadly available in clinical studies. However, it remains a challenge to utilize these data effectively for clinical applications due to the high dimension of the data and the highly correlated expression between individual genes.
Methods: We proposed a method to transform RNA sequencing data into artificial image objects (AIOs) and applied convolutional neural network (CNN) algorithms to classify these AIOs. With the AIO technique, we considered each gene as a pixel in an image and its expression level as the pixel intensity. Using the GSE96058 (n = 2976), GSE81538 (n = 405), and GSE163882 (n = 222) datasets, we created AIOs for the subjects and designed CNN models to classify biomarker Ki67 and Nottingham histologic grade (NHG).
Results: With fivefold cross-validation, we achieved a classification accuracy and AUC of 0.821 ± 0.023 and 0.891 ± 0.021 for Ki67 status. For NHG, the weighted average of categorical accuracy was 0.820 ± 0.012, and the weighted average of AUC was 0.931 ± 0.006. With GSE96058 as training data and GSE81538 as testing data, the accuracy and AUC for Ki67 were 0.826 ± 0.037 and 0.883 ± 0.016, and those for NHG were 0.764 ± 0.052 and 0.882 ± 0.012, respectively. These results were 10% better than the results reported in the original studies. For Ki67, the calls generated from our models had greater power for predicting survival than the calls from trained pathologists in survival analyses.
Conclusions: We demonstrated that RNA sequencing data can be transformed into AIOs and used to classify Ki67 status and NHG with CNN algorithms. The AIO method can handle high-dimensional data with highly correlated variables, with no need for variable selection. With the AIO technique, a data-driven, consistent, and automation-ready model can be developed to classify biomarkers with RNA sequencing data and provide more efficient care for cancer patients.
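The gene-to-pixel mapping can be sketched as follows; the gene ordering, zero padding, and min–max scaling here are illustrative assumptions, not necessarily the authors' exact preprocessing:

```python
import numpy as np

def to_aio(expression, side=None):
    """Arrange a gene-expression vector into a square artificial image
    object (AIO): each gene becomes one pixel and its expression level
    the pixel intensity, padding unused pixels with zeros."""
    expression = np.asarray(expression, dtype=float)
    if side is None:
        # Smallest square that holds all genes
        side = int(np.ceil(np.sqrt(expression.size)))
    image = np.zeros(side * side)
    image[: expression.size] = expression
    # Min-max scale to [0, 1] so intensities are comparable across subjects
    lo, hi = image.min(), image.max()
    if hi > lo:
        image = (image - lo) / (hi - lo)
    return image.reshape(side, side)

# 500 hypothetical gene expression values -> a 23 x 23 single-channel image
aio = to_aio(np.random.default_rng(1).lognormal(size=500))
print(aio.shape)  # (23, 23)
```

The resulting 2-D array can be fed to any standard image CNN, which is what lets the method sidestep explicit variable selection.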


2021 ◽  
Vol 13 (19) ◽  
pp. 3857
Author(s):  
Dongsheng Wei ◽  
Dongyang Hou ◽  
Xiaoguang Zhou ◽  
Jun Chen

Multi-temporal remote sensing images are the primary sources for change detection. However, it is difficult to obtain comparable multi-temporal images at the same season and time of day with the same sensor. Considering the texture homogeneity among objects belonging to the same category, this paper presents a new change detection approach using a texture feature space outlier index computed from mono-temporal remote sensing images and vector data. In the proposed approach, a texture feature contribution index (TFCI) is defined based on information gain to select the optimal texture features, and a feature space outlier index (FSOI) based on local reachability density is presented to automatically identify outlier samples and changed objects. Our approach includes three steps: (1) a sampling method is designed considering the spatial distribution and topographic properties of the image objects extracted by segmenting the recent image with the existing vector map; (2) samples with changed categories are refined by an iterative procedure of texture feature selection and outlier sample elimination; and (3) the changed image objects are identified and classified by using the refined samples to calculate the FSOI values of the image objects. Three experiments in two study areas were conducted to validate the performance of the approach. Overall accuracies of 95.94%, 96.36%, and 96.28% were achieved, respectively, while the omission and commission errors for every category were very low. Four widely used methods based on two-temporal images were selected for comparison, and the accuracy of the proposed method is higher than all of them. This indicates that our approach is effective and feasible.
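The FSOI's use of local reachability density is the idea behind the local outlier factor (LOF); a compact NumPy sketch of that density-based scoring on synthetic feature vectors (not the paper's texture features or exact index):

```python
import numpy as np

def lof_scores(X, k=3):
    """Local outlier factor from local reachability density.
    Scores near 1 indicate inliers; scores well above 1 indicate outliers."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)  # pairwise distances
    np.fill_diagonal(d, np.inf)                          # ignore self-distance
    knn = np.argsort(d, axis=1)[:, :k]        # indices of k nearest neighbors
    kdist = np.sort(d, axis=1)[:, k - 1]      # distance to the k-th neighbor
    # reach-dist(a, b) = max(k-distance(b), d(a, b))
    reach = np.maximum(kdist[knn], d[np.arange(n)[:, None], knn])
    lrd = k / reach.sum(axis=1)               # local reachability density
    return lrd[knn].mean(axis=1) / lrd        # LOF = mean neighbor lrd / own lrd

# A tight cluster of feature vectors plus one changed object far away
X = np.r_[np.random.default_rng(2).normal(0, 0.1, (20, 2)), [[3.0, 3.0]]]
scores = lof_scores(X, k=3)
print(scores.argmax())  # 20 -- the far-away point gets the largest score
```

Objects whose feature vectors sit in low-density regions relative to their neighbors score high, which is how changed objects stand out from same-category samples.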


2021 ◽  
Vol 7 (8) ◽  
pp. 125
Author(s):  
Yan Gong ◽  
Georgina Cosma ◽  
Hui Fang

Visual-semantic embedding (VSE) networks create joint image–text representations that map images and texts into a shared embedding space, enabling various information retrieval tasks such as image–text retrieval, image captioning, and visual question answering. The most recent state-of-the-art VSE-based networks are VSE++, SCAN, VSRN, and UNITER. This study evaluates the performance of these VSE networks for the task of image-to-text retrieval and identifies and analyses their strengths and limitations to guide future research on the topic. The experimental results on Flickr30K reveal that the pre-trained network, UNITER, achieved 61.5% average Recall@5 for the task of retrieving all relevant descriptions, while the traditional networks, VSRN, SCAN, and VSE++, achieved 50.3%, 47.1%, and 29.4% average Recall@5, respectively, on the same task. An additional analysis was performed on image–text pairs from the 25 worst-performing classes, using a subset of the Flickr30K-based dataset, to identify the limitations of the best-performing models, VSRN and UNITER. These limitations are discussed from the perspective of image scenes, image objects, image semantics, and basic functions of neural networks.
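Average Recall@5 can be computed as follows; this is one common definition (the fraction of relevant texts found in the top five retrieved, averaged over query images), and the study's exact protocol may differ:

```python
import numpy as np

def recall_at_k(sim, relevant, k=5):
    """Image-to-text Recall@k.

    sim[i, j]   -- similarity of image i and text j (e.g. from a VSE model)
    relevant[i] -- set of indices of texts that describe image i
    """
    recalls = []
    for i, rel in enumerate(relevant):
        topk = set(np.argsort(-sim[i])[:k])     # k most similar texts
        recalls.append(len(topk & rel) / len(rel))
    return float(np.mean(recalls))

# Toy similarity matrix: 2 images, 6 candidate texts
sim = np.array([[0.9, 0.8, 0.1, 0.7, 0.2, 0.6],
                [0.1, 0.2, 0.9, 0.3, 0.8, 0.4]])
relevant = [{0, 1, 3}, {2, 4}]
print(recall_at_k(sim, relevant, k=5))  # 1.0: all relevant texts in the top 5
```

Scoring against all relevant descriptions per image, rather than a single caption, is what makes this a stricter "retrieve all relevant descriptions" variant of the metric.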


2021 ◽  
Vol 45 (4) ◽  
pp. 562-574
Author(s):  
A.A. Egorova ◽  
V.V. Sergeyev

Superpixel-based image processing and analysis methods usually use only a small set of superpixel features; expanding the description of superpixels can improve the quality of processing algorithms. In this paper, a set of 25 basic superpixel features of shape, intensity, geometry, and location is proposed. The features meet the requirements of low computational complexity during image superpixel segmentation and of sufficiency for solving a wide class of application tasks. Applying this set, we present a modification of the well-known approach to superpixel generation. It consists of a fast primary superpixel segmentation of the image with a strict homogeneity predicate, which yields superpixels that preserve the intensity information of the original image with high accuracy, followed by enlargement of the superpixels with softer homogeneity predicates. The experiments show that the approach significantly reduces the number of image elements, which lowers the complexity of processing algorithms, while the enlarged superpixels correspond more accurately to the image objects.
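A few basic superpixel features of the kind catalogued above (area, intensity statistics, centroid) can be computed from a label map as follows; this is an illustrative subset, not the paper's full 25-feature set:

```python
import numpy as np

def superpixel_features(image, labels):
    """Compute simple per-superpixel features from an intensity image
    and an integer label map of the same shape."""
    feats = {}
    for lbl in np.unique(labels):
        mask = labels == lbl
        ys, xs = np.nonzero(mask)     # pixel coordinates of this superpixel
        vals = image[mask]            # its intensity values
        feats[int(lbl)] = {
            "area": int(mask.sum()),
            "mean": float(vals.mean()),
            "std": float(vals.std()),
            "centroid": (float(ys.mean()), float(xs.mean())),
        }
    return feats

# Toy 4x4 image split into two superpixels (left and right halves)
img = np.arange(16, dtype=float).reshape(4, 4)
labels = np.hstack([np.zeros((4, 2), int), np.ones((4, 2), int)])
f = superpixel_features(img, labels)
print(f[0]["area"], f[0]["mean"])  # 8 6.5
```

Features like these are cheap to update incrementally when two superpixels merge, which is what makes the enlarge-with-softer-predicates step inexpensive.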


Author(s):  
G. Yu ◽  
X. Zhou ◽  
D. Hou ◽  
D. Wei

Abstract. Quality is the key issue in judging the usability of crowdsourced geographic data. However, owing to the lack of professional training among volunteers and the phenomenon of malicious labeling, crowdsourced data contain many abnormal or poor-quality objects. Based on this observation, an abnormal crowdsourced data detection method based on image features is proposed in this paper. The approach includes three main steps. 1) The crowdsourced vector data are used to segment the corresponding remote sensing imagery, yielding image objects with a priori information (e.g., shape and category) from the vector data and spectral information from the images. A sampling method is then designed considering the spatial distribution and topographic properties of the objects, and initial samples are obtained, although some of them are abnormal or of poor quality. 2) A feature contribution index (FCI) is defined based on information gain to select the optimal features, and a feature space outlier index (FSOI) is presented to automatically identify outlier samples and changed objects. The initial samples are refined by an iterative procedure; after the iteration, the optimal features are determined, the refined samples with categories are obtained, and the imagery feature space is established using the optimal features for each category. 3) The abnormal objects are identified by calculating the FSOI values of the image objects against the refined samples. To validate its effectiveness, an abnormal crowdsourced data detection prototype was developed in C# with Visual Studio 2013; the above algorithms and methods were implemented and verified using the water and vegetation categories as examples, with OSM (OpenStreetMap) data and the corresponding imagery of Changsha city as experimental data.
After optimal image feature selection, the angular second moment (ASM), contrast, inverse difference moment (IDM), mean, variance, difference entropy, and normalized difference green index (NDGI) were used to detect abnormal vegetation data, and the IDM, difference entropy, correlation, and maximum band value were used for water. Experimental results show that this method effectively detects abnormal water and vegetation data in OSM: the missed detection rates for vegetation and water are both near zero, and the positive detection rates reach 90.4% and 83.8%, respectively.
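The information-gain criterion behind the FCI can be sketched for a single feature and threshold; the paper's actual index aggregates such measures over texture features, so this is only illustrative:

```python
import numpy as np

def entropy(y):
    """Shannon entropy (bits) of a label array."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, y, threshold):
    """Information gain of splitting the samples at a feature threshold:
    entropy of the labels minus the split-weighted conditional entropy."""
    left, right = y[feature <= threshold], y[feature > threshold]
    n = len(y)
    cond = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(y) - cond

# Hypothetical samples: 0 = water, 1 = vegetation
y = np.array([0, 0, 0, 1, 1, 1])
good = np.array([0.1, 0.2, 0.3, 0.8, 0.9, 1.0])   # separates the classes
noisy = np.array([0.1, 0.9, 0.3, 0.2, 0.8, 1.0])  # mixes them
print(information_gain(good, y, 0.5), information_gain(noisy, y, 0.5))
# the separating feature earns the full 1.0 bit of gain
```

Features that separate the categories cleanly earn high gain and are kept as "optimal" features; uninformative ones earn gain near zero and are dropped.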


2021 ◽  
Author(s):  
Paul L. Pegnato

Using a set of photographic print rolls containing over 4,000 image frames of the Martian landscape produced by NASA from images obtained by cameras mounted on the Viking Lander I and II spacecraft in 1976, and a companion CD-ROM set containing replicates of Viking’s visual data, this thesis will explore the photographic technologies and the image processing procedures used to create NASA image products for scientific research. It will also examine the relationship between the photographic rolls and the original digital entities on the CD-ROMs and explore why such science-based photographic objects should be collected by a museum of photography.

