image acquisition device
Recently Published Documents

TOTAL DOCUMENTS: 15 (five years: 6)
H-INDEX: 2 (five years: 0)

2021 ◽  
Vol 22 (11) ◽  
pp. 839-847
Author(s):  
Hyeon-Chae Yoo ◽  
Jong-Guk Lim ◽  
Ah-Yeong Lee ◽  
Bal-Geum Kim ◽  
Young-Wook Seo ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6185
Author(s):  
Damjan Zadnik ◽  
Andrej Žemva

In this work, we present an eye-image acquisition device that can serve as the image-acquisition front end of compact, low-cost, easy-to-integrate iris-recognition products for smart-city access control. We discuss the advantages and disadvantages of iris recognition compared with fingerprint and face recognition, outline the main drawbacks of existing commercial solutions, and propose a concept design for door-mounted access control systems based on iris recognition technology. Our eye-image acquisition device is built around a low-cost camera module. An integrated infrared distance measurement is used for active image focusing, and FPGA image processing handles raw-RGB-to-grayscale demosaicing and passive image focusing. The integrated visible-light illumination meets the IEC 62471 photobiological safety standard. We present the operation of the distance-measurement and image-focusing subsystems, example images of an artificial toy eye acquired under different illumination conditions, and the calculation of illumination exposure hazards. We acquired a sharp image of an artificial toy eye 22 mm in diameter from a distance of approximately 10 cm, with 400 pixels across the iris diameter, an average acquisition time of 1 s, and illumination below hazardous exposure levels.
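The abstract mentions raw-RGB demosaicing and passive image focusing but not the exact algorithms. The sketch below, under assumed details (an RGGB Bayer layout and a variance-of-Laplacian sharpness metric, both common choices rather than the paper's documented ones), shows how such a pipeline can be prototyped in NumPy:

```python
import numpy as np

def bayer_to_gray(raw):
    """Collapse an RGGB Bayer mosaic to grayscale by averaging each
    2x2 cell (one R, two G, one B sample). Assumed layout; not the
    paper's actual FPGA demosaicing algorithm."""
    h, w = raw.shape
    cells = raw[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return cells.mean(axis=(1, 3))

def sharpness(gray):
    """Variance-of-Laplacian focus measure: higher means sharper.
    A widely used passive-focus metric; the paper's metric is not
    specified in the abstract."""
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return lap.var()

# A flat (fully defocused) frame scores zero; a striped frame scores higher.
flat = np.full((64, 64), 128.0)
stripes = 40.0 * ((np.arange(64)[:, None] // 2) % 2)  # horizontal bands
textured = flat + stripes
```

A passive-focus loop would sweep the lens position and keep the frame that maximises `sharpness()`.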


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Shuai Mu ◽  
Haibo Qin ◽  
Jia Wei ◽  
Qingkang Wen ◽  
Sihan Liu ◽  
...  

A sheep-body segmentation robot can improve production hygiene, product quality, and cutting accuracy, a major change from traditional manual segmentation. With reference to the New Zealand sheep-body segmentation specification, a vision system for a Cartesian-coordinate robot that cuts half-sheep carcasses was developed and tested. The workflow of the vision system was designed, and an image acquisition device based on an Azure Kinect sensor was built. LabVIEW software implementing the image processing algorithm was then integrated with the RGB-D image acquisition device to construct an automatic vision system. An image processing system based on Deeplab v3+ networks was employed to locate the ribs and spine. Taking advantage of the positions of the ribs and spine in the split half-sheep, a key-point-based method was designed to determine five cutting curves; the seven key points are located at the convex points of the ribs and spine and at the root of the hind leg. Using the mapping between the depth image and space coordinates, the 3D coordinates of the curves were computed. Finally, the kinematics equation of the Cartesian-coordinate robot arm was established, and the 3D coordinates of the curves were converted into the corresponding motion parameters of the robot arm. The experimental results indicated that the automatic vision system located the cutting curves with a success rate of 98.4%, a processing time of 4.2 s per half-sheep, and a location error of approximately 1.3 mm. The positioning accuracy and speed of the vision system meet the requirements of a sheep-cutting production line and show the potential to automate even the most challenging processing operations currently carried out manually by human operators.
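The step of mapping depth-image pixels to space coordinates is, in the usual RGB-D formulation, a pinhole back-projection using the camera intrinsics. A minimal sketch, with placeholder intrinsics rather than an actual Azure Kinect calibration:

```python
import numpy as np

def depth_to_point(u, v, depth_mm, fx, fy, cx, cy):
    """Back-project a depth pixel (u, v) to camera-frame 3D coordinates
    via the standard pinhole model. fx, fy (focal lengths in pixels) and
    cx, cy (principal point) would come from the sensor's calibration."""
    z = depth_mm
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Placeholder intrinsics for illustration only
fx = fy = 600.0
cx, cy = 320.0, 288.0

# A pixel 100 columns right of the principal point, at 1 m measured depth
p = depth_to_point(420.0, 288.0, 1000.0, fx, fy, cx, cy)
```

Applying this per pixel along each detected cutting curve yields the 3D curve that is then fed to the robot-arm kinematics.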


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Xu Shengyong ◽  
Peng Biye ◽  
Wu Haiyang ◽  
Li Fushuai ◽  
Cai Xingkui ◽  
...  

In manually propagating potato test-tube plantlets (PTTPs), the plantlet is usually grasped and cut at the node point between the cotyledon and stem, which is hard to locate and is easily damaged by the gripper. Replacing manual operation with an agricultural robot would greatly improve the efficiency and quality of PTTP propagation. An automatic machine-vision-guided system for PTTP propagation was developed and tested. In this paper, the workflow of the vision system was designed and the image acquisition device was built. The image processing algorithm was then integrated with the image acquisition device to construct an automatic PTTP propagation vision system. An image processing system for locating the node point was employed to determine a suitable operation point on the stem, and a binocular stereo vision algorithm was applied to compute the 3D coordinates of the node points. Finally, the kinematics equation of the three-axis parallel manipulator was established, and the three-dimensional coordinates of the nodes were transformed into the corresponding parameters X, Y, and Z of the manipulator's three driving sliders. The experimental results indicated that the automatic vision system achieved a success rate of 98.4%, a processing time of 0.68 s per three plants, and a location error of approximately 1 mm when locating plantlets at the medial expansion period (22 days).
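For a rectified binocular pair, the core of stereo 3D computation is triangulation from disparity: Z = f·B/d. A minimal sketch, with illustrative calibration values (the paper's actual focal length and baseline are not given in the abstract):

```python
def stereo_depth(xl, xr, f_px, baseline_mm):
    """Depth of a point from its horizontal positions xl, xr in a
    rectified left/right image pair: Z = f * B / d, where d = xl - xr
    is the disparity in pixels, f_px the focal length in pixels, and
    baseline_mm the distance between the two camera centres."""
    d = xl - xr
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f_px * baseline_mm / d

# A node point seen at x=350 px (left) and x=320 px (right):
# 30 px disparity with f=900 px and a 60 mm baseline
z = stereo_depth(350.0, 320.0, 900.0, 60.0)
```

The X and Y coordinates then follow from the same pinhole geometry, after which a hand-eye calibration maps them to the manipulator's slider parameters.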


Author(s):  
Maoqi Dong ◽  
Xingguang Wang ◽  
Tao Liang ◽  
Guoqing Yang ◽  
Chuangyou Zhang ◽  
...  

2014 ◽  
Vol 2 (3) ◽  
pp. 15-25
Author(s):  
Dongmei Zheng ◽  
Zhendong Dai ◽  
Hongmo Wang

Colour inspection in traditional Chinese medicine (CITCM) has been used for thousands of years to diagnose illness. However, CITCM is limited in its accuracy and in its ability to quantify illness. Thus, a system for objective CITCM is developed in this study. The system comprises two parts: hardware and software. The hardware is an image acquisition device operating under standard lighting conditions. The software performs a three-step digital image processing pipeline: first, skin/non-skin classification is performed; second, the facial features are localised; finally, the facial regions corresponding to the five internal organs are extracted. The chromaticity is subsequently calculated using 100 samples. Experimental results demonstrate that the proposed scheme outperforms existing methods for objective CITCM research.
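One common way to compute chromaticity from RGB image data is to normalise out intensity, keeping only the colour proportions; the abstract does not state which colour space the paper actually uses, so the rg-chromaticity sketch below is an assumption:

```python
import numpy as np

def chromaticity(rgb):
    """Normalised rg chromaticity of RGB values: r = R/(R+G+B),
    g = G/(R+G+B). This discards brightness and keeps only colour,
    which makes skin-patch measurements comparable across exposure
    levels. Illustrative; not necessarily the paper's formulation."""
    rgb = np.asarray(rgb, dtype=float)
    s = rgb.sum(axis=-1, keepdims=True)
    return (rgb / s)[..., :2]  # the blue component is 1 - r - g

# A reddish skin-tone sample: R=150, G=90, B=60
rg = chromaticity([150.0, 90.0, 60.0])
```

The same function applied to a facial-region mask, averaged over its pixels, would give the per-region chromaticity values compared across the 100 samples.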

