rgb image
Recently Published Documents


TOTAL DOCUMENTS

382
(FIVE YEARS 217)

H-INDEX

24
(FIVE YEARS 7)

2022 ◽  
Vol 192 ◽  
pp. 106617
Author(s):  
Yanchao Zhang ◽  
Wen Yang ◽  
Wenbo Zhang ◽  
Jiya Yu ◽  
Jianxin Zhang ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8440
Author(s):  
Fuyang Li ◽  
Zhiguo Wu ◽  
Jingyu Li ◽  
Zhitong Lai ◽  
Botong Zhao ◽  
...  

This paper presents a method for measuring aircraft landing gear angles based on a monocular camera and a CAD aircraft model. Condition monitoring of the aircraft landing gear is a prerequisite for the safe landing of the aircraft, and traditional manual observation is highly subjective. In recent years, target detection models based on deep learning and pose estimation methods relying on a single RGB image have made significant progress. Building on these algorithms, this paper proposes a method for measuring the actual angles of landing gears in two-dimensional images. A single RGB image of an aircraft is fed to the target detection module to obtain the key points of the landing gears. After extraction and scale normalization of the pixels inside the aircraft prediction box, a vector field network votes for the key points of the fuselage. Knowing the pixel positions of the key points and the geometric constraints of the aircraft, the angle between the landing gear and the fuselage plane can be calculated even without depth information. The vector field loss function is improved based on the distance between pixels and key points, and synthetic datasets of aircraft with landing gears at different angles are created to verify the validity of the proposed algorithm. The experimental results show that the mean angular error of the proposed algorithm for the landing gears is less than 5 degrees on the light-varying dataset.
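As a toy illustration of the in-image angle measurement the abstract describes, the sketch below computes the angle between a landing-gear strut segment and the fuselage axis from 2D keypoints. The keypoint coordinates and function name are hypothetical, and the paper's CAD-model constraints and projective correction are omitted.

```python
import math

def angle_between(p_base, p_tip, q_base, q_tip):
    """Angle in degrees between two 2D image segments, e.g. a landing-gear
    strut (hinge -> wheel) and the fuselage axis (nose -> tail)."""
    v = (p_tip[0] - p_base[0], p_tip[1] - p_base[1])
    w = (q_tip[0] - q_base[0], q_tip[1] - q_base[1])
    dot = v[0] * w[0] + v[1] * w[1]
    cross = v[0] * w[1] - v[1] * w[0]
    return abs(math.degrees(math.atan2(cross, dot)))

# A strut pointing straight "down" in the image vs. a horizontal fuselage axis:
print(angle_between((0, 0), (0, 5), (0, 0), (8, 0)))  # 90.0
```

In the paper the 2D angle must still be related to the true 3D gear angle via the known aircraft geometry; this sketch only covers the pixel-space step.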


2021 ◽  
Vol 13 (24) ◽  
pp. 5102
Author(s):  
Rui Yang ◽  
Xiangyu Lu ◽  
Jing Huang ◽  
Jun Zhou ◽  
Jie Jiao ◽  
...  

Disease and pest detection on grape foliage is essential for grape yield and quality. RGB images (RGBI), multispectral images (MSI), and thermal infrared images (TIRI) are widely used in plant health detection. In this study, we collected three types of grape foliage images covering six common classes (anthracnose, downy mildew, leafhopper, mites, viral disease, and healthy) in the field. ShuffleNet V2 was used to build the detection models. Based on the accuracies of the RGBI, MSI, TIRI, and multi-source data concatenation (MDC) models, a multi-source data fusion (MDF) decision-making method was proposed to improve detection performance on grape foliage, aiming to enhance the decisions made on RGBI by fusing the MSI and TIRI. The results showed that 40% of the incorrect detection outputs were rectified using the MDF decision-making method. The overall accuracy of the MDF model was 96.05%, an improvement of 2.64%, 13.65%, and 27.79% over the RGBI, MSI, and TIRI models using label smoothing, respectively. In addition, the MDF model was based on a lightweight network with 3.785 M total parameters and 0.362 G multiply-accumulate operations, making it highly portable and easy to apply.
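The abstract does not spell out the exact fusion rule, but decision-level fusion of this kind can be sketched as follows. This is a minimal Python illustration with made-up class probabilities; the function name, fallback rule, and confidence threshold are assumptions, not the paper's method.

```python
import numpy as np

def mdf_decision(p_rgb, p_msi, p_tir, conf_threshold=0.5):
    """Decision-level fusion sketch: trust the RGB prediction when it is
    confident; otherwise fall back to an average over all three modalities."""
    if p_rgb.max() >= conf_threshold:
        return int(p_rgb.argmax())
    fused = (p_rgb + p_msi + p_tir) / 3.0
    return int(fused.argmax())

p_rgb = np.array([0.30, 0.35, 0.35])   # uncertain RGB prediction
p_msi = np.array([0.10, 0.80, 0.10])   # confident MSI prediction
p_tir = np.array([0.20, 0.60, 0.20])
print(mdf_decision(p_rgb, p_msi, p_tir))  # 1
```

The design point this captures is the one the abstract reports: predictions the RGB model gets wrong can be rectified when the other modalities disagree confidently.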


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 8092
Author(s):  
Maomao Zhang ◽  
Ao Li ◽  
Honglei Liu ◽  
Minghui Wang

The analysis of hand–object poses from RGB images is important for understanding and imitating human behavior and acts as a key factor in various applications. In this paper, we propose a novel coarse-to-fine two-stage framework for hand–object pose estimation, which explicitly models hand–object relations in 3D pose refinement rather than in the process of converting 2D poses to 3D poses. Specifically, in the coarse stage, 2D heatmaps of hand and object keypoints are obtained from the RGB image and subsequently fed into a pose regressor to derive coarse 3D poses. In the fine stage, an interaction-aware graph convolutional network called InterGCN is introduced to perform pose refinement by fully leveraging the hand–object relations in a 3D context. One major challenge in 3D pose refinement lies in the fact that the relations between hand and object change dynamically across different hand–object interaction (HOI) scenarios. In response, we leverage both general and interaction-specific relation graphs to significantly enhance the capacity of the network to cover variations of HOI scenarios for successful 3D pose refinement. Extensive experiments demonstrate state-of-the-art performance of our approach on benchmark hand–object datasets.
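In coarse stages of this kind, 2D keypoint coordinates are commonly read off each predicted heatmap as its argmax. A minimal NumPy sketch of that step, assuming one heatmap per keypoint (the paper's exact decoding is not specified in the abstract):

```python
import numpy as np

def keypoints_from_heatmaps(heatmaps):
    """Recover (x, y) pixel coordinates for each keypoint as the argmax
    of its predicted heatmap. `heatmaps` has shape (K, H, W)."""
    k, h, w = heatmaps.shape
    flat = heatmaps.reshape(k, -1).argmax(axis=1)
    ys, xs = np.unravel_index(flat, (h, w))
    return np.stack([xs, ys], axis=1)  # shape (K, 2)

hm = np.zeros((2, 8, 8))
hm[0, 3, 5] = 1.0  # keypoint 0 peaks at (x=5, y=3)
hm[1, 6, 2] = 1.0  # keypoint 1 peaks at (x=2, y=6)
print(keypoints_from_heatmaps(hm).tolist())  # [[5, 3], [2, 6]]
```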


2021 ◽  
Author(s):  
Yassir Benhammou ◽  
Domingo Alcaraz-Segura ◽  
Emilio Guirado ◽  
Rohaifa Khaldi ◽  
Boujemâa Achchab ◽  
...  

ABSTRACT
Land-Use and Land-Cover (LULC) mapping is relevant for many applications, from Earth system and climate modelling to territorial and urban planning. Global LULC products are continuously improving as remote sensing data and methods grow. However, there is still low consistency among LULC products due to low accuracy in some regions and for some LULC types. Here, we introduce Sentinel2GlobalLULC, a Sentinel-2 RGB image dataset built from the consensus of 15 global LULC maps available in Google Earth Engine. Sentinel2GlobalLULC v1.1 contains 195,572 RGB images organized into 29 global LULC mapping classes. Each image is a tile of 224 × 224 pixels at 10 × 10 m spatial resolution and was built as a cloud-free composite from all Sentinel-2 images acquired between June 2015 and October 2020. Metadata include a unique LULC type annotation per image, together with the level of consensus, reverse geo-referencing, and the global human modification index. Sentinel2GlobalLULC is optimized for state-of-the-art deep learning models and provides a new gateway towards building precise and robust global or regional LULC maps.


2021 ◽  
Vol 9 (2) ◽  
pp. 239
Author(s):  
Rudi Heriansyah ◽  
Wahyu Mulyo Utomo

Scilab is an open-source, cross-platform computational environment available for academic and research purposes as a free alternative to mature proprietary computational software such as MATLAB. Among the important libraries available for Scilab are image processing toolboxes dedicated to image and video processing. There are three major toolboxes for this purpose: the Scilab image processing toolbox (SIP), the Scilab image and video processing toolbox (SIVP), and, more recently, the image processing design toolbox (IPD). This paper focuses on SIVP because of its widespread use and its ability to handle streaming video files as well (note that IPD also supports video processing); the differences between SIVP and IPD are also highlighted. Testing shows that, in looping benchmarks, Octave and FreeMat are faster than Scilab. However, when converting an RGB image to a grayscale image, Scilab outperforms both Octave and FreeMat.
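The RGB-to-grayscale benchmark mentioned above is a weighted luminance transform. A NumPy sketch of the same operation, assuming the common ITU-R BT.601 weights (the paper does not state which weights each package uses):

```python
import numpy as np

def rgb_to_gray(img):
    """Luminance grayscale conversion using ITU-R BT.601 weights.
    `img` is an (H, W, 3) array; returns an (H, W) array."""
    weights = np.array([0.299, 0.587, 0.114])
    return img[..., :3] @ weights

pixel = np.array([[[255, 0, 0]]], dtype=float)  # one pure-red pixel
print(rgb_to_gray(pixel))  # [[76.245]]
```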


Electronics ◽  
2021 ◽  
Vol 10 (22) ◽  
pp. 2813
Author(s):  
Yun Peng ◽  
Shengyi Zhao ◽  
Jizhan Liu

Accurately extracting the grape cluster at the front of overlapping grape clusters is the primary problem for a grape-harvesting robot. To solve the difficult problem of identifying and segmenting overlapping grape clusters in a trellis cultivation environment, a simple method based on a deep learning network and the idea of region growing is proposed. First, the region of the grape in an RGB image was obtained by a finely trained DeepLabV3+ model; transfer learning was adopted when training the network with a limited number of training samples. Then, the corresponding region of the grape in the depth image captured by a RealSense D435 was processed by the proposed depth region growing (DRG) algorithm to extract the front cluster. The depth region growing method clusters pixels by depth value instead of gray value. Finally, the method fills the holes in the clustered region of interest, extracts the contours, and maps the obtained contours back to the RGB image. Images captured by the RealSense D435 in a natural trellis environment were used to evaluate the performance of the proposed method. The experimental results showed that the recall and precision of the proposed method were 89.2% and 87.5%, respectively, indicating that the method can satisfy the requirements of practical robotic grape harvesting.
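The depth-based clustering idea behind the DRG algorithm can be sketched as a breadth-first region grow over the depth image: a front cluster separates from deeper clusters behind it because the depth values jump at the boundary. This is a simplified illustration; the seed choice, connectivity, and tolerance are assumptions, not the paper's parameters.

```python
from collections import deque

import numpy as np

def depth_region_grow(depth, seed, tol=0.05):
    """Grow a region from `seed`, absorbing 4-connected neighbours whose
    depth is within `tol` (metres) of the current pixel's depth."""
    h, w = depth.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(depth[ny, nx] - depth[y, x]) <= tol:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask

# Left column of pixels is a front cluster (~0.5 m); right column is a
# deeper cluster (~1.2 m) that the depth jump keeps out of the region.
depth = np.array([[0.50, 0.52, 1.20],
                  [0.51, 0.53, 1.22],
                  [0.50, 0.52, 1.21]])
print(depth_region_grow(depth, (0, 0)).sum())  # 6 pixels in the front cluster
```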


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yuval Nehoshtan ◽  
Elad Carmon ◽  
Omer Yaniv ◽  
Sharon Ayal ◽  
Or Rotem

Abstract
Achieving seed germination quality standards poses a real challenge to seed companies, as they are compelled to abide by strict certification rules while having only partial seed separation solutions at their disposal. This discrepancy results in the wasteful disqualification of seed lots holding considerable amounts of good seeds, and further translates into financial losses and supply chain insecurity. Here, we present the first-ever generic germination prediction technology based on deep learning and RGB image data, which facilitates seed classification by germinability and usability, two facets of germination fate. We show that the technology can render dozens of disqualified seed lots of seven vegetable crops, representing different genetics and production pipelines, industrially viable, and can adequately classify lots by utilizing available crop-level image data instead of lot-specific data. These achievements constitute a major milestone in the deployment of this technology for industrial seed sorting by germination fate for multiple crops.

