The Detection of Thread Roll's Margin Based on Computer Vision

Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6331
Author(s):  
Zhiwei Shi ◽  
Weimin Shi ◽  
Junru Wang

The automatic detection of a thread roll's margin is one of the key problems in the textile field. Because the traditional detection method based on thread tension has the disadvantages of high cost and low reliability, this paper proposes a technique that installs a camera on a mobile robot and uses computer vision to detect the thread roll's margin. We define the thread roll's margin as the difference between the thread roll's radius and the bobbin's radius. First, we capture images of the thread roll's end surface. Second, we obtain the bobbin's image coordinates by convolving the image with a circle gradient operator. Third, we fit the thread roll's and bobbin's contours to ellipses, and then delete false detections according to the bobbin's image coordinates. Finally, we rectify each sub-image of the thread roll by a perspective transformation and establish the conversion between actual size and pixel size; the difference between the two concentric circles' radii is the thread roll's margin. However, false detections remain, and the error can exceed 19.4 mm when the margin is small. To improve precision and remove false detections, we use deep learning to detect the thread roll's and bobbin's radii, from which the margin is calculated, and then fuse the two results. Since the deep learning method also produces some false detections, we additionally estimate the thread roll's margin from the thread consumption speed. Lastly, we use a Kalman filter to fuse the measured and estimated values; the average error is less than 5.7 mm.
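The final fusion step can be sketched as a one-dimensional Kalman filter that blends each vision-based margin measurement with the margin predicted from the thread consumption speed. All parameter names and noise values below are illustrative assumptions, not values from the paper:

```python
def fuse_margin(measurements, dt, consumption_speed, r_meas, q_proc, m0):
    """Fuse vision-based margin measurements with a consumption-speed
    prediction using a 1-D Kalman filter (hypothetical parameters).

    measurements      : sequence of measured margins (mm)
    dt                : time step between measurements (s)
    consumption_speed : decrease of the margin per second (mm/s)
    r_meas            : measurement noise variance
    q_proc            : process noise variance
    m0                : initial margin estimate (mm)
    """
    x, p = m0, 1.0  # state (margin) and its variance
    fused = []
    for z in measurements:
        # Predict: the margin shrinks as thread is consumed.
        x_pred = x - consumption_speed * dt
        p_pred = p + q_proc
        # Update: blend the prediction with the vision measurement.
        k = p_pred / (p_pred + r_meas)  # Kalman gain
        x = x_pred + k * (z - x_pred)
        p = (1.0 - k) * p_pred
        fused.append(x)
    return fused
```

The gain `k` automatically weights the noisy camera measurement against the smooth consumption-speed model, which is how isolated false detections are suppressed.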

Agronomy ◽  
2020 ◽  
Vol 10 (4) ◽  
pp. 590
Author(s):  
Zhenqian Zhang ◽  
Ruyue Cao ◽  
Cheng Peng ◽  
Renjie Liu ◽  
Yifan Sun ◽  
...  

A cut-edge detection method based on machine vision was developed to obtain the navigation path of a combine harvester. First, the Cr component of the YCbCr color model was selected as the grayscale feature factor. Then, by detecting the end of the crop row, judging the target demarcation, and extracting the feature points, the region of interest (ROI) was gained automatically. Subsequently, vertical projection was applied to reduce noise: all points in the ROI were evaluated, and a dividing point was found in each row. The hierarchical clustering method was used to identify outliers. Finally, polynomial fitting was used to recover the straight or curved cut-edge. Results from the samples showed an average cut-edge localization error of 2.84 cm. The method is capable of supporting the automatic navigation of a combine harvester.
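The first feature-extraction steps can be sketched as follows. The BT.601 conversion constants for the Cr channel are standard, but implementing them in plain NumPy (rather than whatever library the authors used) is an assumption:

```python
import numpy as np

def cr_channel(rgb):
    """Extract the Cr component of the YCbCr color model (BT.601),
    used as the grayscale feature factor; rgb is an HxWx3 uint8 array."""
    r = rgb[..., 0].astype(np.float32)
    g = rgb[..., 1].astype(np.float32)
    b = rgb[..., 2].astype(np.float32)
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.clip(cr, 0, 255).astype(np.uint8)

def vertical_projection(gray):
    """Column-wise intensity sum, used to reduce noise before
    locating a dividing point in each row of the ROI."""
    return gray.astype(np.int64).sum(axis=0)
```

The Cr channel is a sensible feature here because ripe crop and cut stubble differ mainly in the red-difference chroma, which this channel isolates from brightness.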


2021 ◽  
Vol 1966 (1) ◽  
pp. 012051
Author(s):  
Shuai Zou ◽  
Fangwei Zhong ◽  
Bing Han ◽  
Hao Sun ◽  
Tao Qian ◽  
...  

2021 ◽  
Vol 109 (5) ◽  
pp. 863-890
Author(s):  
Yannis Panagakis ◽  
Jean Kossaifi ◽  
Grigorios G. Chrysos ◽  
James Oldfield ◽  
Mihalis A. Nicolaou ◽  
...  

Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Shuo Zhou ◽  
Xiujuan Chai ◽  
Zixuan Yang ◽  
Hongwu Wang ◽  
Chenxue Yang ◽  
...  

Abstract
Background: Maize (Zea mays L.) is one of the most important food sources in the world and has been one of the main targets of plant genetics and phenotypic research for centuries. Observation and analysis of various morphological phenotypic traits during maize growth are essential for genetic and breeding studies. The generally huge number of samples produces an enormous amount of high-resolution image data. As high-throughput plant phenotyping platforms are increasingly used in maize breeding trials, there is a genuine need for software tools that can automatically identify visual phenotypic features of maize plants and perform batch processing on image datasets.
Results: On the boundary between computer vision and plant science, we utilize advanced deep learning methods based on convolutional neural networks to empower the workflow of maize phenotyping analysis. This paper presents Maize-IAS (Maize Image Analysis Software), an integrated application supporting one-click analysis of maize phenotypes, embedding multiple functions: (I) Projection, (II) Color Analysis, (III) Internode Length, (IV) Height, (V) Stem Diameter, and (VI) Leaves Counting. Taking an RGB image of maize as input, the software provides a user-friendly graphical interface and rapid calculation of multiple important phenotypic characteristics, including leaf sheath point detection and leaf segmentation. For the Leaves Counting function, the mean and standard deviation of the difference between prediction and ground truth are 1.60 and 1.625, respectively.
Conclusion: Maize-IAS is easy to use and requires no professional knowledge of computer vision or deep learning. All functions support batch processing, enabling automated, labor-reduced recording, measurement, and quantitative analysis of maize growth traits on large datasets. We demonstrate the efficiency and capability of our techniques and software for image-based plant research, which also shows the feasibility of AI technology in agriculture and plant science.
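The Leaves Counting evaluation reduces to simple error statistics over per-plant counts. A minimal sketch, with made-up counts standing in for real predictions:

```python
import statistics

def count_error_stats(predicted, ground_truth):
    """Mean and population standard deviation of the per-plant
    difference between predicted and ground-truth leaf counts
    (the kind of evaluation reported for Leaves Counting).
    The input lists here are illustrative, not the paper's data."""
    diffs = [abs(p - g) for p, g in zip(predicted, ground_truth)]
    return statistics.mean(diffs), statistics.pstdev(diffs)
```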


2021 ◽  
Vol 11 (10) ◽  
pp. 4589
Author(s):  
Ivan Duvnjak ◽  
Domagoj Damjanović ◽  
Marko Bartolac ◽  
Ana Skender

The main principle of vibration-based damage detection is to interpret changes in the dynamic properties of a structure as indicators of damage. In this study, the mode shape damage index (MSDI) method was used to identify discrete damage in plate-like structures. This damage index is based on the difference between modified modal displacements in the undamaged and damaged states of the structure. To assess the advantages and limitations of the proposed algorithm, we performed experimental modal analysis on a reinforced concrete (RC) plate under 10 different damage cases. The MSDI values were calculated considering single and/or multiple damage locations, different levels of damage, and boundary conditions. The experimental results confirmed that the MSDI method can detect the existence of damage, identify single and/or multiple damage locations, and estimate damage severity in the case of single discrete damage.
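A mode-shape-difference index of this kind can be sketched as below. The exact MSDI formula is not given in the abstract, so this squared-difference form over unit-normalised mode shapes is an illustrative assumption, not the paper's definition:

```python
import numpy as np

def msdi(phi_u, phi_d):
    """Sketch of a mode-shape-based damage index: for each measurement
    point, sum over modes the squared difference between unit-normalised
    modal displacements of the undamaged (phi_u) and damaged (phi_d)
    states. phi_* are (n_modes, n_points) arrays. A plausible form for
    illustration only, not the exact MSDI definition from the paper."""
    pu = phi_u / np.linalg.norm(phi_u, axis=1, keepdims=True)
    pd = phi_d / np.linalg.norm(phi_d, axis=1, keepdims=True)
    return ((pu - pd) ** 2).sum(axis=0)
```

Points where the index peaks are candidate damage locations, since local stiffness loss perturbs the mode shape most strongly there.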


2021 ◽  
pp. 136943322098663
Author(s):  
Diana Andrushia A ◽  
Anand N ◽  
Eva Lubloy ◽  
Prince Arulraj G

Health monitoring of concrete, including detecting defects such as cracking and spalling on fire-affected concrete structures, plays a vital role in the maintenance of reinforced cement concrete structures. However, this process mostly relies on human inspection and on the subjective knowledge of the inspectors. To overcome this limitation, a deep-learning-based automatic crack detection method is proposed. The method uses a U-Net architecture with an encoder-decoder framework and performs pixel-wise classification to detect thermal cracks accurately. A binary cross-entropy (BCE) loss function is selected as the evaluation function. The trained U-Net is capable of detecting both major and minor thermal cracks under various heating durations. The proposed U-Net crack detection is a novel method for detecting thermal cracks developed on fire-exposed concrete structures. Compared with other state-of-the-art methods, it is found to be accurate, with an Intersection over Union (IoU) of 78.12%.
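The IoU figure used to report the 78.12% result is a standard overlap metric for pixel-wise segmentation; a minimal sketch for binary crack masks:

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union between a predicted and a ground-truth
    binary crack mask (boolean arrays of the same shape)."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0  # two empty masks agree fully
```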


2021 ◽  
Vol 11 (12) ◽  
pp. 5488
Author(s):  
Wei Ping Hsia ◽  
Siu Lun Tse ◽  
Chia Jen Chang ◽  
Yu Len Huang

The purpose of this article is to evaluate the accuracy of optical coherence tomography (OCT) measurement of choroidal thickness in healthy eyes using a deep-learning method with the Mask R-CNN model. Thirty EDI-OCT scans of thirty patients were enrolled. A mask region-based convolutional neural network (Mask R-CNN), composed of a deep residual network (ResNet) and feature pyramid networks (FPNs) with standard convolutional and fully connected heads for mask and box prediction, respectively, was used to automatically delineate the choroid layer. The average choroidal thickness and subfoveal choroidal thickness were measured. Models based on ResNet 50 layers deep (R50) and ResNet 101 layers deep (R101) were combined: the R101 ∪ R50 (OR) model demonstrated the best accuracy, with average errors of 4.85 pixels and 4.86 pixels, respectively, while the R101 ∩ R50 (AND) model took the least time, with an average execution time of 4.6 s. The Mask R-CNN models showed a good prediction rate for the choroid layer, with accuracy rates of 90% and 89.9% for average choroidal thickness and average subfoveal choroidal thickness, respectively. In conclusion, the deep-learning method using the Mask R-CNN model provides a fast and accurate measurement of choroidal thickness. Compared with manual delineation, it is more effective, making it feasible for clinical application and larger-scale research on the choroid.
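The OR and AND fusions of the two backbone predictions are plain set operations on binary masks; a minimal sketch, where the per-column thickness measure is a simplified stand-in for the paper's procedure:

```python
import numpy as np

def fuse_masks(mask_r50, mask_r101, mode="or"):
    """Combine the choroid masks predicted by the R50 and R101 models:
    the union (OR model, R101 ∪ R50) or the intersection (AND model,
    R101 ∩ R50), as described in the abstract. Masks are boolean arrays."""
    if mode == "or":
        return np.logical_or(mask_r50, mask_r101)
    return np.logical_and(mask_r50, mask_r101)

def mean_thickness(mask):
    """Average layer thickness in pixels: mean count of mask pixels per
    image column (a simplified illustration, not the paper's exact method)."""
    return mask.sum(axis=0).mean()
```

The union is the permissive estimate (any backbone's pixel counts), while the intersection is the conservative one, which explains why the AND model trades coverage for speed and consistency.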

