image center: Recently Published Documents

Total documents: 29 (five years: 8)
H-index: 4 (five years: 0)

Author(s):  
Samer Kais Jameel ◽  
Sezgin Aydin ◽  
Nebras H. Ghaeb

Light enters the human eye through the cornea, the outer part of the eye, which directs it to the pupil to regulate the amount of light reaching the lens. Accordingly, the human cornea must not be exposed to any damage or disease that may lead to vision disturbances. Such damage can be revealed by the topographic images used by ophthalmologists. An important priority is therefore the early and accurate diagnosis of diseases that may affect corneal integrity through the use of machine learning algorithms, in particular local feature extraction from the image. We propose a new descriptor called the local information pattern (LIP) to overcome a shortcoming of local binary patterns, namely the loss of information from the image, and to solve the problem of image rotation. The LIP uses the sub-image center intensity to estimate the neighbors' weights, which are then used to calculate a so-called contrast-based centre (CBC). In addition, a local pattern (LP) is calculated for each image block to distinguish between two sub-images that have the same CBC; the LP is the sum of transitions of the neighbors' weights from the sub-image center value to one and vice versa. Finally, histograms of both the CBC and the LP are created and blended to form a robust local feature vector that can be used for diagnosis and detection.
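As an illustration of how such a center-weighted descriptor could be computed for a single 3 × 3 block, the sketch below derives binary neighbor weights from the center intensity, a contrast measure, and a transition count. The specific formulas (mean absolute deviation for the CBC, circular transition count for the LP) are assumptions for illustration, not the authors' published definitions.

```python
import numpy as np

def lip_features(block):
    """LIP-style features for one 3x3 block (illustrative sketch only)."""
    center = block[1, 1]
    # Neighbors taken clockwise around the center pixel.
    idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    neighbors = np.array([block[i, j] for i, j in idx], dtype=float)

    # Binary weights estimated from the center intensity (assumed thresholding).
    weights = (neighbors >= center).astype(int)

    # Contrast-based centre (CBC): here assumed to be the mean absolute
    # deviation of the neighbors from the center intensity.
    cbc = float(np.mean(np.abs(neighbors - center)))

    # Local pattern (LP): number of 0<->1 transitions around the ring,
    # which separates blocks that share the same CBC.
    lp = int(np.sum(weights != np.roll(weights, 1)))
    return cbc, lp
```

Histograms of the per-block CBC and LP values would then be blended into the final feature vector.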


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Yuanqin Liu ◽  
Qinglu Zhang ◽  
Lingchong Liu ◽  
Cuiling Li ◽  
Rongwei Zhang ◽  
...  

To study the influence of quantitative susceptibility mapping (QSM) on the segmentation of deep brain nuclei, a 2.5D Attention U-Net with multiple inputs and multiple outputs is proposed for segmenting the RN, SN, and STN regions in high-resolution QSM images, so that deep learning achieves accurate segmentation of the deep nuclei in brain QSM images. In the experiments, each layer of the 100-case dataset is first cropped, about the image center, from 384 × 288 to 128 × 128. Each layer is then combined with its two adjacent layers in the slice direction into a 2.5D image, i.e., (I_{t-1}, I_t, I_{t+1}), where I_t denotes the image of layer t; the input size thus changes from 128 × 128 to 128 × 128 × 3, where 3 represents three consecutive layers. The SNR of SWPI with respect to the STN is twice that of SWI. The method can automatically segment the small deep gray matter nuclei (RN, SN, and STN) in brain QSM images, as well as the irregularly shaped pancreas, with its large individual differences, in abdominal CT images.
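The cropping and layer-stacking step described above can be sketched as follows; the function names and the boundary handling (clamping at the first and last slice) are assumptions, not the paper's code.

```python
import numpy as np

def center_crop(slice_2d, size=128):
    """Crop a 2D slice to size x size around the image center."""
    h, w = slice_2d.shape
    top, left = (h - size) // 2, (w - size) // 2
    return slice_2d[top:top + size, left:left + size]

def make_25d_input(volume, t, size=128):
    """Stack slice t with its two neighbors into a size x size x 3 input,
    i.e. (I_{t-1}, I_t, I_{t+1}), clamping at the volume boundaries."""
    n = volume.shape[0]
    layers = [center_crop(volume[max(t - 1, 0)], size),
              center_crop(volume[t], size),
              center_crop(volume[min(t + 1, n - 1)], size)]
    return np.stack(layers, axis=-1)
```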


Electronics ◽  
2021 ◽  
Vol 10 (17) ◽  
pp. 2125
Author(s):  
Jatin Upadhyay ◽  
Abhishek Rawat ◽  
Dipankar Deb

Autonomous unmanned aerial vehicles work seamlessly within GPS signal range, but their performance deteriorates in GPS-denied regions. This paper presents a collaborative computer vision-based approach for tracking a target at a specific location of interest in the image. The proposed method tracks any object without considering its properties such as shape, color, size, or pattern, provided the target remains visible and within the line of sight during tracking. The method gives the user the freedom to select any target from the image and to form a formation around it. For each drone, parameters such as the distance and angle from the image center to the object are calculated. Among all the drones, the one with the strongest GPS signal or nearest to the target is chosen as the master drone, which calculates the relative angle and distance between the object and the other drones using approximate geo-location. Compared with actual measurements, tests on a quadrotor UAV frame achieve 99% location accuracy in a robust environment, within the same GPS longitude and latitude block as GPS-only navigation methods. The individual drones communicate with the ground station through a telemetry link, and the master drone calculates the parameters using data collected at the ground station. Various formation flying methods help escort the other drones to meet the desired objective with a single high-resolution first-person view (FPV) camera. The proposed method is tested on an Airborne Object Target Tracking (AOT) aerial vehicle model and achieves higher tracking accuracy.
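A minimal sketch of the per-drone image-plane measurement (distance and bearing from the image center to the selected object) is shown below; the bounding-box representation and the helper name are illustrative assumptions, not the paper's implementation.

```python
import math

def offset_from_image_center(bbox, image_width, image_height):
    """Pixel distance and bearing from the image center to the tracked
    object's bounding-box center; bbox = (x, y, w, h)."""
    cx, cy = image_width / 2.0, image_height / 2.0
    ox, oy = bbox[0] + bbox[2] / 2.0, bbox[1] + bbox[3] / 2.0
    dx, dy = ox - cx, oy - cy
    distance = math.hypot(dx, dy)
    angle_deg = math.degrees(math.atan2(dy, dx))
    return distance, angle_deg
```

The master drone would combine such image-plane offsets with its approximate geo-location to estimate the relative angles and distances for the other drones.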


2021 ◽  
Vol 15 (1) ◽  
Author(s):  
Yang-Cheng Huang ◽  
Chia-Hao Tsai ◽  
Po-Chih Shih ◽  
Ching-Yuan Chen ◽  
Ming-Chih Ho ◽  
...  

Abstract In this paper, we present an integrated robotic arm with a flexible endoscope for laparoscopy. The endoscope holder is built to mimic a human operator that reacts to the surgeon's push while maintaining both the incision opening through the patient's body and the center of the endoscopic image. An impedance control algorithm is used to react to the surgeon's push when the robotic arm gets in the way. A modified software remote center-of-motion (RCM) constraint formulation then enables simultaneous RCM and impedance control. We derived the kinematic relationship between the robotic arm and the line of sight of the flexible endoscope for image center control. Using this kinematic model, we integrated the task control for RCM, surgeon cooperation, and endoscope image centering into a semi-autonomous system. Implementation of the control algorithm, in both MATLAB simulation and on the HIWIN RA605-710 robotic arm with a MitCorp F500 flexible endoscope, demonstrated the feasibility of the proposed algorithm.
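To illustrate the idea of yielding to the surgeon's push while being pulled back toward a reference pose, a simplified Cartesian admittance step is sketched below; the gains, structure, and integration scheme are assumptions, not the paper's controller, and the software RCM constraint is omitted.

```python
import numpy as np

def admittance_step(x, x_des, f_ext, K, D, dt):
    """One integration step of a spring-damper admittance law:
    the arm yields to the external force f_ext while a virtual
    spring (K) and damper (D) pull it back toward x_des."""
    v = np.linalg.solve(D, f_ext - K @ (x - x_des))
    return x + v * dt
```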


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Felix von Haxthausen ◽  
Jannis Hagenah ◽  
Mark Kaschwich ◽  
Markus Kleemann ◽  
Verónica García-Vázquez ◽  
...  

Abstract The first choice in diagnostic imaging for patients suffering from peripheral arterial disease (PAD) is 2D ultrasound (US). However, proper imaging requires a skilled and experienced sonographer, and the procedure is highly user-dependent. A robotized US system that autonomously scans the peripheral arteries has the potential to overcome these limitations. In this work, we extend a previously proposed system with a hierarchical image analysis pipeline based on convolutional neural networks (CNNs) in order to control the robot. The system was evaluated by checking its ability to keep the vessel lumen of a leg phantom within the US image while scanning along the artery. In 100% of the images acquired during the scan process, the whole vessel lumen was visible. With an insensitivity margin of 2.74 mm, the mean absolute distance between the vessel center and the horizontal image center line was 2.47 mm and 3.90 mm for an easy and a complex scenario, respectively. In conclusion, this system presents the basis for fully automated peripheral artery imaging in humans using a radiation-free approach.
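A sketch of the control signal that could be derived from the CNN segmentation, i.e. the horizontal offset of the vessel lumen from the image center line with the reported insensitivity margin, is given below; the function and its details are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def lateral_offset_mm(vessel_mask, px_per_mm, margin_mm=2.74):
    """Horizontal offset (mm) of the segmented lumen centroid from the
    image center line; returns 0 inside the insensitivity margin and
    None if the lumen is not visible."""
    ys, xs = np.nonzero(vessel_mask)
    if xs.size == 0:
        return None
    offset_mm = (xs.mean() - vessel_mask.shape[1] / 2.0) / px_per_mm
    return 0.0 if abs(offset_mm) <= margin_mm else offset_mm
```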


2020 ◽  
Vol 12 (7) ◽  
pp. 1175
Author(s):  
Xianwei Wang ◽  
David M. Holland

The Sentinel-1A satellite was launched in April 2014 with a C-band Terrain Observation with Progressive Scans synthetic aperture radar (TOPSAR) onboard and has collected a wealth of high-quality images for global change studies. However, the low-magnitude signals around the image margins (black margins) do not preserve the normal standard level, which limits the potential usage of these data. Through image analysis, we find that the signal in the black margin (BM) is strongly dominated by the closest effective signals and shows an increasing trend from the image boundary toward the image center. An edge detector is developed based on these signal characteristics of the BM, and an automatic method to discriminate and eliminate the BM is designed. Images from both extra wide (EW) and interferometric wide (IW) swath observation modes, covering the land, ocean, and coast of the Antarctic, are used to verify the robustness of our method. Compared with BM edges extracted by human interpretation, our method has a maximum BM edge extraction error of 1.9 ± 3.2 pixels. When considering the perimeter (or area) difference along the radial direction of the BM edge, our method has an average extraction accuracy of −0.35 ± 0.11 (or 0.14 ± 1.38) pixels, which suggests that the method is effective and can be used to eliminate the BM for multidisciplinary applications of Sentinel-1 data.
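The increasing-signal trend from the boundary toward the image center suggests a simple per-line scan for the BM edge; the threshold-based rule below is a naive stand-in for the paper's detector, included only to illustrate the idea.

```python
import numpy as np

def black_margin_edge(line, threshold):
    """Index of the first sample above threshold when scanning one image
    line from the boundary toward the image center; samples before it
    are treated as black margin."""
    valid = np.nonzero(line > threshold)[0]
    return int(valid[0]) if valid.size else len(line)
```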


2020 ◽  
Vol 10 (7) ◽  
pp. 2480
Author(s):  
Chao-Ming Yu ◽  
Yu-Hsien Lin

In this study, underwater recognition technology and a fuzzy control system were adopted to adjust the attitude and revolution speed of a self-developed autonomous underwater vehicle (AUV). To validate the functionality of visual-recognition control, an experiment was conducted in the towing tank of the Department of Systems and Naval Mechatronic Engineering, National Cheng Kung University, in which an underwater lighting box was towed by a towing carriage at low speed. By adding real-time contour approximation and a circle-fitting algorithm to the image-processing procedure, the relative position between the AUV and the underwater lighting box was calculated. Both rudder plane angles and propeller revolution speeds were determined after the size and location of the lighting box were measured in the image. Finally, AUV performance with visual-recognition control was verified by keeping the target object at the image center during the passage.
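An illustrative OpenCV pipeline for locating the lighting box via contour approximation and circle fitting, and for measuring its offset from the image center, might look like the following; the threshold value and the overall structure are assumptions, not the authors' implementation.

```python
import cv2

def find_lighting_box(frame_bgr, intensity_threshold=200):
    """Return the fitted circle center, its radius, and its offset from
    the image center, or None if no bright blob is found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, intensity_threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (x, y), radius = cv2.minEnclosingCircle(largest)
    h, w = gray.shape
    return (x, y), radius, (x - w / 2.0, y - h / 2.0)
```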


2018 ◽  
Vol 17 (1) ◽  
pp. 79-86 ◽  
Author(s):  
N. A. Starasotnikau ◽  
R. V. Feodortsau

The accuracy of coordinate determination for images of simple shapes is one of the important parameters in metrological optoelectronic systems such as autocollimators, star sensors, Shack-Hartmann sensors, schemes for the geometric calibration of digital cameras for aerial and space imagery, and various tracking systems. The paper describes a mathematical model of a measuring stand based on a collimator that projects a test object onto the photodetector of an optoelectronic device. The model takes into account the noises characteristic of photodetectors: the shot noise of the desired (photon) signal, the shot noise of the dark signal, readout noise, and the spatial non-uniformity of the CCD (charge-coupled device) matrix elements. To reduce the effect of noise, it is proposed to apply a Wiener filter to smooth the image for unambiguous identification and to apply a brightness threshold. The paper compares two algorithms for coordinate determination: by the energy gravity center and by the contour. Sobel, Prewitt, Roberts, Laplacian of Gaussian, and Canny detectors were used to determine the test-object contour; the contour algorithm searches for the image contour in the form of a circle, approximates it, and determines the image center. Coordinate-determination errors were calculated for test objects of various diameters (5, 10, 20, 30, 40, and 50 photodetector pixels) and for signal-to-noise ratios of 200, 100, 70, 20, and 10. The signal-to-noise ratio was calculated as the difference between the maximum image intensity of the test object and the background, divided by the mean-square deviation of the background. The accuracy of coordinate determination improved by 0.5-1 orders of magnitude as the signal-to-noise ratio increased, while accuracy improvement with increasing test-object diameter is typical only for large signal-to-noise ratios (70 or more). The investigations established that the energy-gravity-center algorithm is more accurate than the contour methods and requires less computing power (in the MATLAB software package), which is related to the discreteness of contour determination.
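The energy-gravity-center algorithm amounts to an intensity-weighted centroid after background thresholding; a minimal sketch is given below (the Wiener filtering step is omitted, and the thresholding rule is an assumption).

```python
import numpy as np

def energy_gravity_center(image, background_threshold):
    """Intensity-weighted centroid (x, y) of a spot image after the
    background has been thresholded away; returns None if nothing
    remains above the threshold."""
    img = image.astype(float)
    img[img < background_threshold] = 0.0
    total = img.sum()
    if total == 0:
        return None
    ys, xs = np.indices(img.shape)
    return (xs * img).sum() / total, (ys * img).sum() / total
```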

