Automatic recognition of lactating sow postures from depth images by deep learning detector

2018 ◽  
Vol 147 ◽  
pp. 51-63 ◽  
Author(s):  
Chan Zheng ◽  
Xunmu Zhu ◽  
Xiaofan Yang ◽  
Lina Wang ◽  
Shuqin Tu ◽  
...  

Author(s):  
Yi Liu ◽  
Ming Cong ◽  
Hang Dong ◽  
Dong Liu

Purpose: The purpose of this paper is to propose a new method based on three-dimensional (3D) vision technologies and human-skill-integrated deep learning to solve assembly positioning tasks such as peg-in-hole insertion.

Design/methodology/approach: A hybrid camera configuration was used to provide global and local views. Eye-in-hand mode guided the peg into contact with the hole plate using 3D vision in the global view. Once the peg was in contact with the workpiece surface, eye-to-hand mode provided the local view to accomplish peg-hole positioning based on a trained CNN.

Findings: The assembly positioning experiments proved that the proposed method successfully distinguished the target hole from other holes of the same size using the CNN. The robot planned its motion according to the depth images and the human-skill guideline. The final positioning precision was sufficient for the robot to carry out force-controlled assembly.

Practical implications: The developed framework can have an important impact on the robotic assembly positioning process and can be combined with existing force-guidance assembly technology to build a complete autonomous assembly pipeline.

Originality/value: This paper proposes a new approach to robotic assembly positioning based on 3D vision technologies and human-skill-integrated deep learning. A dual-camera swapping mode was used to provide visual feedback for the entire assembly motion-planning process. The proposed workpiece positioning method provided effective disturbance rejection, autonomous motion planning and improved overall performance with depth-image feedback. The proposed peg-hole positioning method with integrated human skill provided the ability to avoid target perceptual aliasing and to make successive motion decisions during robotic assembly manipulation.
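The local-view decision step described above lends itself to a small classification network over depth-image patches. The following is a minimal sketch, not the authors' implementation: it assumes PyTorch, a single-channel depth patch as input, and a hypothetical discrete action set (move in ±x/±y or insert); the patch size and architecture are illustrative only.

```python
# Minimal sketch (assumed, not the paper's code): a small CNN that maps a
# local-view depth patch to a discrete motion decision for peg-hole positioning.
# The five-action set (+x, -x, +y, -y, insert) and patch size are assumptions.
import torch
import torch.nn as nn


class PegHoleCNN(nn.Module):
    def __init__(self, num_actions: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_actions)

    def forward(self, depth_patch: torch.Tensor) -> torch.Tensor:
        # depth_patch: (batch, 1, H, W) normalised depth values
        x = self.features(depth_patch).flatten(1)
        return self.classifier(x)  # logits over motion decisions


if __name__ == "__main__":
    model = PegHoleCNN()
    patch = torch.rand(1, 1, 96, 96)            # dummy local-view depth patch
    action = model(patch).argmax(dim=1).item()  # index of the chosen motion
    print("chosen motion index:", action)
```

In a real setup the predicted action index would be mapped to a small Cartesian displacement of the end effector, and the loop would repeat until the "insert" decision is reached.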


2021 ◽  
Author(s):  
Kazuyuki Kaneda ◽  
Tatsuya Ooba ◽  
Hideki Shimada ◽  
Osamu Shiku ◽  
Yuji Teshima

2020 ◽  
Vol 8 ◽  
Author(s):  
Sohaib Younis ◽  
Marco Schmidt ◽  
Claus Weiland ◽  
Stefan Dressler ◽  
Bernhard Seeger ◽  
...  

As herbarium specimens are increasingly digitised and made accessible in online repositories, advanced computer vision techniques are being used to extract information from them. The presence of certain plant organs on herbarium sheets is useful information in various scientific contexts, and automatic recognition of these organs will help mobilise such information. In our study, we use deep learning to detect plant organs on digitised herbarium specimens with Faster R-CNN. For our experiment, we manually annotated hundreds of herbarium scans with thousands of bounding boxes for six types of plant organs and used them for training and evaluating the plant organ detection model. The model worked particularly well on leaves and stems, while flowers, although also present in large numbers on the sheets, were not recognised equally well.
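For readers who want to reproduce a comparable setup, the sketch below shows how a Faster R-CNN detector with six organ classes plus background can be assembled with torchvision. The pretrained backbone, class count and dummy image size are assumptions for illustration, not details taken from the study's training pipeline.

```python
# Minimal sketch (assumptions: torchvision >= 0.13 and six plant-organ classes
# plus background; the study's actual data pipeline is not reproduced here).
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 1 + 6  # background + six plant-organ categories

# Start from a COCO-pretrained detector and swap in a new box-predictor head.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Inference on one herbarium scan (dummy tensor standing in for a real image).
model.eval()
scan = torch.rand(3, 1024, 768)              # RGB scan scaled to [0, 1]
with torch.no_grad():
    detections = model([scan])[0]            # dict with boxes, labels, scores
print(detections["boxes"].shape, detections["labels"].shape)
```

Fine-tuning would then follow the standard torchvision detection recipe: annotated boxes and labels passed as targets during training, with the new head learning the organ categories.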


2021 ◽  
Author(s):  
Jingwei Yang ◽  
Yikang Wang ◽  
Chong Li ◽  
Wei Han ◽  
Weiwei Liu ◽  
...  

Background: Pronuclear assessment appears to be able to distinguish good and bad embryos at the zygote stage, but clinical studies have produced paradoxical results. This situation might be caused by relying on coarse, qualitative detection of the dynamically developing pronuclei. Here, we aim to establish a quantitative pronuclear measurement method by applying expert-experience deep learning to large annotated datasets.

Methods: Confirmed hand-annotated 2PN images (13,419) were used for deep learning, and the corresponding errors were recorded through manual checks for subsequent parameter adjustment. We used 790 embryos with 52,479 PN images from 155 patients to analyse the pronuclear area together with the preimplantation genetic testing results. An exponential fitting equation was established and its key coefficient β1 was extracted from the model for quantitative analysis of pronuclear (PN) annotation and automatic recognition.

Findings: Based on the original female PN coefficient β1, the chromosomal normality rate in blastocysts with the largest PN area was much higher than in blastocysts with the smallest PN area (58.06% vs. 45.16%, OR = 1.68 [1.07-2.64]; P = 0.031). After adjusting coefficient β1 by removing the first three frames, in which outlier PN areas showed high variance, similar but stronger evidence was obtained for coefficient β1 at 12 hours and at 14 hours post-insemination. All these discrepancies resulted from the female propositus in the PGT(SR) subgroup and smaller chromosomal errors.

Conclusion(s): The results suggest that detailed analysis of embryo images could improve our understanding of developmental biology.

Funding: None.
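The abstract does not reproduce the exact fitting equation, so the sketch below assumes a simple exponential form, area(t) = β0 · exp(β1 · t), and shows how a per-embryo β1 could be extracted with SciPy's curve_fit. The model form, variable names and dummy measurements are illustrative assumptions, not the paper's actual equation or data.

```python
# Minimal sketch (assumed exponential model area(t) = beta0 * exp(beta1 * t);
# the paper's exact fitting equation and data are not reproduced here).
import numpy as np
from scipy.optimize import curve_fit


def pn_area_model(t_hours, beta0, beta1):
    """Hypothetical exponential growth model of pronuclear area over time."""
    return beta0 * np.exp(beta1 * t_hours)


# Dummy measurements: hours post-insemination vs. segmented PN area (pixels).
t = np.array([8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0])
area = np.array([520.0, 575.0, 640.0, 700.0, 780.0, 850.0, 940.0])

(beta0, beta1), _ = curve_fit(pn_area_model, t, area, p0=(100.0, 0.1))
print(f"fitted beta1 = {beta1:.4f}")  # per-embryo coefficient for comparison
```

Under this reading, each embryo's fitted β1 summarises how its pronuclear area grows over time, and embryos can then be grouped or ranked by β1 for downstream comparison with genetic test outcomes.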

