image recognition
Recently Published Documents


TOTAL DOCUMENTS

3864
(FIVE YEARS 1606)

H-INDEX

44
(FIVE YEARS 13)

2022 ◽  
Vol 16 (4) ◽  
pp. 1-21
Author(s):  
Honghui Xu ◽  
Zhipeng Cai ◽  
Wei Li

Multi-label image recognition is an indispensable component of many real-world computer vision applications. However, existing studies have overlooked a severe threat of privacy leakage in multi-label image recognition. To fill this gap, this article develops two privacy-preserving models, Privacy-Preserving Multi-label Graph Convolutional Networks (P2-ML-GCN) and Robust P2-ML-GCN (RP2-ML-GCN), in which a differential privacy mechanism is applied to the model's outputs to defend against black-box attacks while avoiding large aggregated noise. In particular, a regularization term is added to the loss function of RP2-ML-GCN to increase prediction accuracy and robustness. A suitable differential privacy mechanism is then designed to reduce the bias of the loss function in P2-ML-GCN and further improve prediction accuracy. We also show that a bounded global sensitivity mitigates the side effects of excessive noise and yields a performance improvement for multi-label image recognition in our models. Theoretical proofs show that both models guarantee differential privacy for the model's outputs, weights, and input features while preserving model robustness. Finally, comprehensive experiments validate the advantages of the proposed models, including applying differential privacy to the model's outputs, incorporating the regularization term into the loss function, and adopting a bounded global sensitivity for multi-label image recognition.
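A minimal sketch of the output-perturbation idea described above, assuming a Laplace mechanism applied to per-label sigmoid scores; the function and parameter names are illustrative and not taken from P2-ML-GCN. Because sigmoid scores lie in [0, 1], the per-label global sensitivity is bounded by 1, which keeps the injected noise moderate.

```python
import numpy as np

def privatize_outputs(scores: np.ndarray, epsilon: float, sensitivity: float = 1.0) -> np.ndarray:
    """Add Laplace(sensitivity / epsilon) noise to each label score and clip back to [0, 1]."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon, size=scores.shape)
    return np.clip(scores + noise, 0.0, 1.0)

# Example: 4 images, 5 labels, scores from some upstream multi-label classifier (made-up values).
rng = np.random.default_rng(0)
clean_scores = rng.uniform(size=(4, 5))
private_scores = privatize_outputs(clean_scores, epsilon=0.5)
predicted_labels = (private_scores > 0.5).astype(int)  # thresholded multi-label decision
print(predicted_labels)
```

Smaller epsilon gives stronger privacy but noisier releases, which is the accuracy trade-off the abstract's bounded-sensitivity argument addresses.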


2022 ◽  
Vol 18 (1) ◽  
pp. 1-31
Author(s):  
Guohao Lan ◽  
Zida Liu ◽  
Yunfan Zhang ◽  
Tim Scargill ◽  
Jovan Stojkovic ◽  
...  

Mobile Augmented Reality (AR), which overlays digital content on the real-world scenes surrounding a user, is bringing immersive interactive experiences where the real and virtual worlds are tightly coupled. To enable seamless and precise AR experiences, an image recognition system that can accurately recognize the object in the camera view with low system latency is required. However, due to the pervasiveness and severity of image distortions, an effective and robust image recognition solution for “in the wild” mobile AR is still elusive. In this article, we present CollabAR, an edge-assisted system that provides distortion-tolerant image recognition for mobile AR with imperceptible system latency. CollabAR incorporates both distortion-tolerant and collaborative image recognition modules in its design. The former enables distortion-adaptive image recognition to improve the robustness against image distortions, while the latter exploits the spatial-temporal correlation among mobile AR users to improve recognition accuracy. Moreover, as it is difficult to collect a large-scale image distortion dataset, we propose a Cycle-Consistent Generative Adversarial Network-based data augmentation method to synthesize realistic image distortion. Our evaluation demonstrates that CollabAR achieves over 85% recognition accuracy for “in the wild” images with severe distortions, while reducing the end-to-end system latency to as low as 18.2 ms.
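The collaborative module can be pictured along the following lines. This is a hedged sketch, not the CollabAR implementation: the per-client reliability weights (e.g., down-weighting heavily distorted views) and all values are assumptions for illustration.

```python
import numpy as np

def collaborative_fuse(client_probs: np.ndarray, client_weights: np.ndarray) -> int:
    """client_probs: (n_clients, n_classes) per-client class probabilities.
    client_weights: (n_clients,) non-negative reliability weights (hypothetical)."""
    w = client_weights / client_weights.sum()
    fused = (w[:, None] * client_probs).sum(axis=0)  # weighted average of the distributions
    return int(fused.argmax())                       # final class decision

# Three nearby clients observe the same object; the heavily blurred client gets a low weight.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.6, 0.3, 0.1],
                  [0.2, 0.7, 0.1]])
weights = np.array([0.9, 0.8, 0.2])  # e.g. inversely related to an estimated distortion level
print(collaborative_fuse(probs, weights))  # -> 0
```

The point of the fusion is that spatially and temporally correlated views can outvote a single distorted frame.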


2022 ◽  
Vol 12 ◽  
Author(s):  
Wei Lu ◽  
Rongting Du ◽  
Pengshuai Niu ◽  
Guangnan Xing ◽  
Hui Luo ◽  
...  

Soybean yield is a highly complex trait determined by multiple factors such as genotype, environment, and their interactions, and the earlier in the growing season it can be predicted, the better. Accurate soybean yield prediction is important for germplasm innovation and for improving planting environment factors. Until now, however, soybean yield has been determined by manually weighing seeds after harvest, which is time-consuming, costly, and imprecise. This paper proposes an in-field soybean yield prediction method based on image recognition of bean pods and leaves, using a deep learning algorithm combined with a generalized regression neural network (GRNN). A faster region-based convolutional neural network (Faster R-CNN), a feature pyramid network (FPN), a single shot multibox detector (SSD), and You Only Look Once (YOLOv3) were evaluated for bean pod recognition, achieving precisions of 86.2, 89.8, 80.1, and 87.4% at 13, 7, 24, and 39 frames per second (FPS), respectively; YOLOv3 was therefore selected as the best balance of precision and speed. To enhance detection performance, YOLOv3 was further improved by changing the IoU loss function, applying an anchor-box clustering algorithm, and using a partial network structure, which raised recognition precision to 90.3%. To improve yield prediction precision, leaves were also identified and counted, and pods were classified by the improved YOLOv3 into one-, two-, three-, four-, and five-seed types, since seed weight varies by pod type. Seed number prediction models for each soybean planter were then built using partial least squares regression (PLSR), a back-propagation (BP) neural network, and GRNN, taking the counts of each pod type and the leaf count as inputs; their prediction accuracies were 96.24, 96.97, and 97.5%, respectively. Finally, the soybean yield of each planter was obtained by accumulating the weight of all soybean pod types, with an average accuracy of up to 97.43%. The results show that it is feasible to predict soybean yield in situ with high precision by fusing the numbers of leaves and of different pod types recognized by a deep neural network combined with a GRNN, which can speed up germplasm innovation and the optimization of planting environment factors.
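As a rough illustration of the final regression step, the sketch below implements a generic GRNN (a Nadaraya-Watson kernel regressor with a Gaussian kernel) over an assumed feature vector of pod-type counts plus leaf count; the feature layout, data values, and bandwidth are hypothetical and not taken from the paper.

```python
import numpy as np

def grnn_predict(X_train: np.ndarray, y_train: np.ndarray, x: np.ndarray, sigma: float = 1.0) -> float:
    """GRNN / Nadaraya-Watson prediction for a single query vector x."""
    d2 = ((X_train - x) ** 2).sum(axis=1)        # squared distances to training samples
    w = np.exp(-d2 / (2.0 * sigma ** 2))         # Gaussian kernel weights
    return float((w * y_train).sum() / w.sum())  # weighted average of training targets

# Hypothetical per-plant features: [n_1seed, n_2seed, n_3seed, n_4seed, n_5seed, n_leaves]
X = np.array([[4, 10, 12, 3, 0, 55],
              [2,  8, 15, 5, 1, 60],
              [6, 12,  9, 2, 0, 48]], dtype=float)
y = np.array([86.0, 105.0, 74.0])                # measured seed counts (made-up)
query = np.array([3, 9, 13, 4, 0, 57], dtype=float)
print(grnn_predict(X, y, query, sigma=5.0))
```

A GRNN has no iterative training phase; the only tunable quantity is the kernel bandwidth, which is one reason it pairs naturally with detector-derived count features.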


2022 ◽  
pp. 191-220
Author(s):  
Soonhoi Ha ◽  
EunJin Jeong ◽  
Duseok Kang ◽  
Jangryul Kim ◽  
Donghyun Kang

Agriculture ◽  
2022 ◽  
Vol 12 (1) ◽  
pp. 62
Author(s):  
Zhu Sun ◽  
Xiangyu Guo ◽  
Yang Xu ◽  
Songchao Zhang ◽  
Xiaohui Cheng ◽  
...  

Hybrid oilseed rape (OSR, Brassica napus) seed production requires two things: stamen sterility in the female OSR plants and effective pollen transfer from the male OSR plants to the pistils of the female plants. Unmanned agricultural aerial systems (UAAS) have developed rapidly in China and have been used for supplementary pollination and aerial spraying during hybrid OSR seed production. This study developed a new method to rapidly recognize male OSR plants and extract the row center line to support UAAS navigation. A male OSR plant recognition model was constructed based on a convolutional neural network (CNN). Sequential images of male OSR plants were extracted, and feature regions and feature points were obtained through morphological and boundary processing and horizontal segmentation, respectively. The recognition accuracies of different CNN structures and segmentation sizes were compared. The male OSR plant row center lines were fitted using the least-squares method (LSM) and the Hough transform. The results showed that the segmentation algorithm could separate the male OSR plants from the complex background. The highest average recognition accuracy was 93.54%, with a minimum loss value of 0.2059, obtained with three convolutional layers, one fully connected layer, and a segmentation size of 40 × 40 pixels. The LSM performed better for center-line fitting. On the original input images, the average recognition accuracies were 98% and 94%, and the average root mean square errors (RMSE) of the row angle were 3.22° and 1.36° under cloudy and sunny lighting conditions, respectively. The results demonstrate the potential of digital imaging technology for recognizing male OSR plant rows for UAAS visual navigation in hybrid OSR supplementary pollination and aerial spraying, which would be a meaningful addition to precision agriculture.
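The center-line fitting step might look roughly like the following sketch, which fits a least-squares line through assumed centroids of detected male OSR plants and reports the row angle; the coordinates, function name, and angle convention are illustrative, not taken from the study.

```python
import numpy as np

def fit_row_line(centroids: np.ndarray):
    """centroids: (n, 2) array of (x, y) pixel coordinates of detected male OSR plants."""
    slope, intercept = np.polyfit(centroids[:, 0], centroids[:, 1], deg=1)  # least-squares fit
    angle_deg = np.degrees(np.arctan(slope))  # row heading relative to the image x-axis
    return slope, intercept, angle_deg

# Made-up centroids of one plant row in image coordinates.
pts = np.array([[40, 310], [120, 298], [205, 287], [290, 279], [370, 266]], dtype=float)
print(fit_row_line(pts))
```

The fitted angle is the quantity whose RMSE is reported in the abstract, since heading error is what the UAAS navigation ultimately consumes.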


2022 ◽  
Vol 2022 ◽  
pp. 1-10
Author(s):  
Dianhai Wang ◽  
Lianmei Shen

Current image recognition methods cannot combine the transmission of image data with the interaction of image features, so the stages of image recognition remain too independent; traditional methods also take longer and cannot fully denoise the images. Therefore, a sports training action image recognition method based on a software-defined network (SDN) architecture is proposed. The SDN architecture integrates image data transmission with the feature-interaction process and centralizes image processing; it consists of an application layer, a control layer, and an infrastructure layer. On this basis, the dimensionality of the image sample set is reduced, and an edge detection operator for arbitrary directions is constructed. Edge filtering is realized by computing edge responses and thresholds using hysteresis thresholding and non-maximum suppression (NMS). The Hough transform is improved to optimize the detection range, and neighborhood features of the sports training actions are extracted to complete the SDN-based recognition of sports training action images. Simulation results show that the proposed method takes less time and denoises images more effectively. In addition, its F1 scores are higher than those of methods reported in the literature, and it converges better, indicating superior overall performance.
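For readers unfamiliar with the edge-filtering terms in the abstract, the sketch below shows a conventional pipeline that combines hysteresis thresholding with non-maximum suppression (OpenCV's Canny detector) followed by a standard Hough line transform. The file name and threshold values are placeholders; this is not the authors' improved algorithm.

```python
import cv2
import numpy as np

# Placeholder input frame; replace with a real sports training action image.
img = cv2.imread("training_action.jpg", cv2.IMREAD_GRAYSCALE)
assert img is not None, "image file not found"

edges = cv2.Canny(img, 50, 150)                    # NMS + low/high hysteresis thresholds
lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)  # standard Hough line transform
print(0 if lines is None else len(lines), "line candidates detected")
```

The paper's contribution is in narrowing the Hough detection range and coupling these steps with the SDN layers, rather than in the base operators shown here.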


2022 ◽  
Vol 30 (4) ◽  
pp. 174-184
Author(s):  
Tomohiko Takayama ◽  
Toshihisa Yashiro ◽  
Sachiyo Sanada ◽  
Tetsuo Katsuragi ◽  
Ryo Sugiura
