AUTOMATIC MICROINJECTION SYSTEM USING STEREOSCOPIC MICROSCOPE

2014 ◽  
pp. 40-44
Author(s):  
Junko Sakiyama ◽  
Hideki Yamamoto

In this paper, we describe a visual feedback system using a stereoscopic microscope that controls a micromanipulator so that a needle head pierces a target to a desired depth. First, we developed an image-processing algorithm for guiding the tip of the needle head to touch the target. Second, we developed an algorithm for predicting the position of the needle-head tip inside the target. In a pre-operation step, the shape of the needle head is stored as a reference pattern. While the needle head pierces the target, the shape of the needle head inside the target is predicted by pattern matching against this reference. In this way, we developed a microinjection system that pierces the target axially. Experimental results show that the proposed system may be useful in micromanipulation tasks such as microinjection into brain areas in neuroanatomy.
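The pattern-matching step described above can be sketched as template matching over the microscope image. The sketch below uses normalized cross-correlation as the matching criterion; the paper does not specify which criterion its system uses, so this is only an illustrative stand-in, written with NumPy.

```python
import numpy as np

def match_template(image, template):
    """Locate `template` in `image` by normalized cross-correlation.

    Returns ((row, col), score) for the best-matching top-left corner.
    Brute-force sliding window; real systems would restrict the search
    region around the previous needle position.
    """
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            if denom == 0:
                continue  # flat window, correlation undefined
            score = (wz * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Toy example: plant a reference pattern in a noisy background.
rng = np.random.default_rng(0)
img = rng.random((40, 40))
tpl = rng.random((8, 8))
img[10:18, 20:28] = tpl            # pattern placed at (10, 20)
pos, score = match_template(img, tpl)
print(pos)
```

In the paper's setting, `tpl` would be the needle-head shape saved in the pre-operation step, and the matched position would feed the prediction of the tip location inside the target.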

In this paper, an adaptive visual feedback system and controller have been designed and implemented in real time to make the movements of a line-follower robot smoother and faster. The robot consists of a pair of motorized wheels, the real-time controller, and a CMOS camera as the only sensor for line detection and feedback. The measurement, based on real-time image processing and motor-drive feedback, makes the robot robust to obstacles and surface disturbances that could deviate it from the line. The image-processing algorithm is also adaptive to the line's color and width. Image-processing techniques are applied in real time to detect the line in the image frame and extract the necessary information (such as the line's edge, coordinates, and angle). An NI myRIO module is used as a stand-alone hardware unit and RT (Real-Time) target for implementing the controllers and image processing in the LabVIEW environment. The results of real-time and non-real-time controller implementations have been compared. To evaluate the performance of real-time image processing in controlling this robot, three types of controllers (P, PI, and fuzzy) were implemented for line-following tests and their results compared. The fuzzy controller was found to control the robot's movements more smoothly and quickly, with fewer errors and a shorter response time than the other controllers.
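A discrete PI controller of the kind compared in this study can be sketched as follows. The gains, sample time, and toy plant model below are illustrative only, not the values used on the myRIO robot:

```python
class PIController:
    """Discrete PI controller: u[k] = Kp*e[k] + Ki*sum(e[j]*dt).

    Gains and sample time are illustrative, not the authors' tuning.
    Output is clamped to actuator limits with simple anti-windup.
    """
    def __init__(self, kp, ki, dt, u_min=-1.0, u_max=1.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def update(self, error):
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        if u > self.u_max:                 # saturate and back off the
            self.integral -= error * self.dt  # integrator (anti-windup)
            u = self.u_max
        elif u < self.u_min:
            self.integral -= error * self.dt
            u = self.u_min
        return u

# Drive a toy lateral-offset model toward the line (offset = 0).
pid = PIController(kp=0.8, ki=0.4, dt=0.02)
offset = 0.5                       # initial lateral offset from the line
for _ in range(1000):              # 20 s of simulated time
    u = pid.update(-offset)        # error = desired(0) - measured
    offset += u * 0.02             # simple integrator plant
print(round(offset, 3))
```

In the robot, the error fed to `update` would be the line offset/angle extracted by the image-processing stage, and `u` would be mapped to a differential wheel-speed command.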


2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Soo Hyun Park ◽  
Sang Ha Noh ◽  
Michael J. McCarthy ◽  
Seong Min Kim

Abstract: This study was carried out to develop a prediction model for the soluble solid content (SSC) of intact chestnuts and to detect internal defects using nuclear magnetic resonance (NMR) relaxometry and magnetic resonance imaging (MRI). Inversion recovery and Carr–Purcell–Meiboom–Gill (CPMG) pulse sequences were used to determine the longitudinal (T1) and transverse (T2) relaxation times, respectively. Partial least squares regression (PLSR) was adopted to predict the SSC of chestnuts from the NMR data and histograms of the MR images. The coefficient of determination (R2), root mean square error of prediction (RMSEP), ratio of prediction to deviation (RPD), and ratio of error range (RER) of the optimized model for predicting SSC were 0.77, 1.41 °Brix, 1.86, and 11.31, respectively, on a validation set. Furthermore, an image-processing algorithm was developed to detect internal defects such as decay, mold, and cavities in MR images; classification with this algorithm was over 94% accurate. Based on these results, it was determined that the NMR signal could be applied to grading chestnuts into several SSC levels, and MRI could be used to evaluate their internal quality.
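The figures of merit reported above follow the standard chemometric definitions: RMSEP is the root mean square prediction error, RPD is the standard deviation of the reference values divided by RMSEP, and RER is the range of the reference values divided by RMSEP. A minimal computation (with made-up SSC values, not the chestnut data) looks like this:

```python
import numpy as np

def prediction_metrics(y_true, y_pred):
    """Chemometric figures of merit on a validation set.

    RMSEP: root mean square error of prediction.
    RPD:   sample SD of reference values / RMSEP.
    RER:   range of reference values / RMSEP.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmsep = np.sqrt(np.mean((y_true - y_pred) ** 2))
    rpd = y_true.std(ddof=1) / rmsep
    rer = (y_true.max() - y_true.min()) / rmsep
    return rmsep, rpd, rer

# Illustrative SSC data in degrees Brix (not the study's data).
y_ref = [12.0, 14.5, 13.2, 15.8, 11.4, 16.1]
y_hat = [12.3, 14.1, 13.5, 15.2, 11.9, 15.8]
rmsep, rpd, rer = prediction_metrics(y_ref, y_hat)
print(round(rmsep, 3), round(rpd, 2), round(rer, 2))
```

Higher RPD and RER indicate a model that resolves more of the natural spread of the reference values relative to its prediction error, which is why they are reported alongside R2 and RMSEP.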


1995 ◽  
Vol 11 (5) ◽  
pp. 751-757 ◽  
Author(s):  
J. A. Throop ◽  
D. J. Aneshansley ◽  
B. L. Upchurch

2011 ◽  
Vol 36 (1) ◽  
pp. 48-57 ◽  
Author(s):  
Kwang-Wook Seo ◽  
Hyeon-Tae Kim ◽  
Dae-Weon Lee ◽  
Yong-Cheol Yoon ◽  
Dong-Yoon Choi

2017 ◽  
Vol 5 (1) ◽  
pp. 28-42 ◽  
Author(s):  
Iryna Borshchova ◽  
Siu O’Young

Purpose: The purpose of this paper is to develop a method for vision-based automatic landing of a multi-rotor unmanned aerial vehicle (UAV) on a moving platform. The landing system must be highly accurate and meet the size, weight, and power restrictions of a small UAV. Design/methodology/approach: The vision-based landing system consists of a pattern of red markers placed on a moving target, an image-processing algorithm for pattern detection, and a servo control for tracking. The suggested approach uses color-based object detection and image-based visual servoing. Findings: The developed prototype system has demonstrated the capability of landing within 25 cm of the desired point of touchdown. The auto-landing system is small (100×100 mm), lightweight (100 g), and consumes little power (under 2 W). Originality/value: The novelty and main contribution of the suggested approach is a creative combination of work in two fields, image processing and controls, as applied to UAV landing. The developed image-processing algorithm has low complexity compared with other known methods, which allows its implementation on general-purpose low-cost hardware. The theoretical design has been verified systematically via simulations and then by outdoor field tests.
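The color-based detection and servoing pipeline can be sketched as thresholding for red pixels, taking their centroid, and issuing a proportional command toward it. The threshold margin, gain, and plant details below are illustrative; the paper's exact marker pattern and control law are not reproduced here:

```python
import numpy as np

def red_marker_centroid(rgb):
    """Return the (row, col) centroid of 'red' pixels, or None if absent.

    A pixel counts as red when its R channel dominates G and B by a
    fixed margin; the margin (60) is illustrative, not the paper's value.
    """
    r = rgb[:, :, 0].astype(int)
    g = rgb[:, :, 1].astype(int)
    b = rgb[:, :, 2].astype(int)
    mask = (r - g > 60) & (r - b > 60)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def servo_command(centroid, shape, gain=0.01):
    """Proportional command that moves the image center onto the marker."""
    cy, cx = shape[0] / 2.0, shape[1] / 2.0
    return gain * (centroid[0] - cy), gain * (centroid[1] - cx)

# Synthetic frame: dark background with one red square marker.
frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[30:40, 100:110] = (200, 20, 20)   # patch centered at (34.5, 104.5)
c = red_marker_centroid(frame)
vy, vx = servo_command(c, frame.shape[:2])
print(c, (round(vy, 3), round(vx, 3)))
```

In image-based visual servoing the error is defined directly in the image plane, as here, rather than in reconstructed 3D coordinates, which keeps the computation cheap enough for the low-cost hardware the paper targets.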

