edge points
Recently Published Documents

TOTAL DOCUMENTS: 147 (last five years: 35)
H-INDEX: 10 (last five years: 3)

2022 ◽  
Author(s):  
Xiaomin Zhang ◽  
Chuanzhen Zheng ◽  
Kaixuan Wang ◽  
Kailei Guo ◽  
Qingqin Tao ◽  
...  

Abstract. Introduction: The aim of this study was to report the clinical profile and outcomes of retinal pigment epithelial detachment (PED) in Vogt-Koyanagi-Harada (VKH) disease and to evaluate the correlation between PED and the subsequent development of central serous chorioretinopathy (CSC) over the whole corticosteroid treatment course. Materials and Methods: A total of 470 eyes with VKH were reviewed, and 12 eyes with both VKH and PED were recruited. Patients were divided into two groups according to whether CSC developed during the treatment course (CSC group and non-CSC group). Best-corrected visual acuity (BCVA) improvement and PED angle (PEDA, the angle between the two lines joining the vertex of the lifted retinal pigment epithelium (RPE) to the two edge points of the Bruch membrane) were compared between the two groups. Results: The prevalence of PED and CSC in VKH was 2.55% (12/470) and 1.06% (5/470), respectively. BCVA improvement in the non-CSC group was greater than in the CSC group, but the difference was not statistically significant (P=0.25). PEDA was significantly smaller in the CSC group than in the non-CSC group (P=0.03). Discussion: PEDA is a suitable parameter to reflect the hydrostatic pressure on and stretching of the RPE. Because PED predisposes selected VKH eyes to the development of CSC, PEDA may be a valuable predictive factor for CSC in VKH patients.
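
The PED angle is a plain planar angle, so it can be reproduced from manually marked OCT coordinates. The sketch below only illustrates that measurement; the coordinates, the ped_angle name, and the use of pixel units are assumptions, not part of the study:

```python
import numpy as np

def ped_angle(vertex, edge_left, edge_right):
    """Angle (degrees) at the PED vertex between the two lines joining the
    vertex of the lifted RPE to the two Bruch membrane edge points.
    All inputs are (x, y) pixel coordinates marked on an OCT B-scan."""
    v1 = np.asarray(edge_left, dtype=float) - np.asarray(vertex, dtype=float)
    v2 = np.asarray(edge_right, dtype=float) - np.asarray(vertex, dtype=float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Hypothetical coordinates: a taller, narrower PED yields a smaller PEDA.
print(ped_angle(vertex=(120, 40), edge_left=(80, 95), edge_right=(160, 95)))
```

A smaller PEDA corresponds to a steeper, more stretched RPE dome, which is the property the authors relate to subsequent CSC.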


2022 ◽  
Vol 2022 ◽  
pp. 1-10
Author(s):  
Feng Chen ◽  
Botao Yang

Image super-resolution is gaining popularity in diverse fields, such as medical and industrial applications, where accuracy is critical. Traditional local edge feature point extraction algorithms rely solely on the edge points of the super-resolution image and compute the geometric center of gravity of nearby edge lines, which results in a low feature recall rate and unreliable results. To overcome this loss of accuracy, this work proposes a new fast extraction algorithm for local edge features of super-resolution images. The paper first builds a super-resolution image reconstruction model to obtain the super-resolution image. The edge contour of the image is then extracted with the Chamfer distance function, and the geometric centers of gravity of the closed and non-closed edge lines are calculated. The edge points are expressed in polar form about the center of gravity, and the local extreme points of the amplitude-diameter (angle-radius) curve determine the feature points on the edges of the super-resolution image. The experimental results show that the proposed algorithm extracts the local edge features of a super-resolution image in 0.02 seconds with an accuracy of up to 96.3%, making it an efficient method for local edge feature extraction from super-resolution images.
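
The abstract outlines the core steps clearly enough to sketch them: compute the centroid ("geometric center of gravity") of an edge line, express the edge points in polar form about it, and keep local extrema of the resulting angle-radius curve as feature points. The following is a minimal sketch of that idea, not the authors' implementation; the Chamfer-distance edge extraction and the reconstruction model are omitted, and the window parameter is an assumption:

```python
import numpy as np

def polar_feature_points(edge_pts, window=5):
    """Candidate edge feature points: points whose polar radius about the
    centroid is a local maximum of the angle-radius (amplitude-diameter) curve."""
    pts = np.asarray(edge_pts, dtype=float)
    center = pts.mean(axis=0)                 # geometric center of gravity
    d = pts - center
    theta = np.arctan2(d[:, 1], d[:, 0])      # polar angle about the centroid
    r = np.hypot(d[:, 0], d[:, 1])            # polar radius
    order = np.argsort(theta)                 # sweep once around the contour
    r_sorted = r[order]
    n = len(r_sorted)
    feats = []
    for i in range(n):
        neigh = r_sorted[[(i + k) % n for k in range(-window, window + 1)]]
        if r_sorted[i] >= neigh.max():        # local extreme of the radius curve
            feats.append(order[i])
    return pts[feats]
```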


2021 ◽  
Vol 5 (6) ◽  
pp. 1036-1043
Author(s):  
Ardi Wijaya ◽  
Puji Rahayu ◽  
Rozali Toyib

Selecting the best smile through image processing is strongly affected by image quality, background, pose, and lighting, so existing image processing algorithms must be analyzed to build a system that can select the best smile. For this purpose the Shi-Tomasi algorithm, commonly used to detect the corners of the smile region in facial images, is adopted. The Shi-Tomasi corner computation processes the target image efficiently in an edge detection ballistic test; corner points are then checked against the estimated translational parameters, with a repeat test on the translational component, to identify causes of image degradation, and edge points are located to identify objects while removing noise from the image. Tests with the Shi-Tomasi algorithm on 20 samples of human facial images, each with 5 different smile images (100 smile images in total), show that the algorithm detects a good smile with an accuracy of 95%, evaluated using the confusion matrix with precision, recall, and accuracy metrics.
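
The corner detection step itself is standard: OpenCV exposes the Shi-Tomasi response through goodFeaturesToTrack. The sketch below shows only that step under assumed parameter values; the smile-region handling and the confusion-matrix evaluation described above are not reproduced, and the file names are placeholders:

```python
import cv2

img = cv2.imread("face.jpg")                      # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# goodFeaturesToTrack implements the Shi-Tomasi corner response.
corners = cv2.goodFeaturesToTrack(gray, maxCorners=50,
                                  qualityLevel=0.01, minDistance=10)
if corners is not None:
    for x, y in corners.reshape(-1, 2):
        cv2.circle(img, (int(x), int(y)), 3, (0, 255, 0), -1)
cv2.imwrite("corners.jpg", img)
```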


Mathematics ◽  
2021 ◽  
Vol 9 (23) ◽  
pp. 3084
Author(s):  
Andrea Raffo ◽  
Silvia Biasotti

The approximation of curvilinear profiles is very popular in digital image processing and leads to numerous applications such as image segmentation, compression, and recognition. In this paper, we develop a novel semi-automatic method based on quasi-interpolation. The method consists of three steps: a preprocessing step exploiting an edge detection algorithm; a splitting procedure that breaks the resulting set of edge points into smaller subsets; and a final step using a local curve approximation, the Weighted Quasi Interpolant Spline Approximation (wQISA), chosen for its robustness to data perturbation. The proposed method builds a sequence of polynomial spline curves, joined with C0 continuity at cusps and G1 continuity elsewhere. To curb underfitting and overfitting, the computation of the local approximations exploits the supervised learning paradigm. The effectiveness of the method is shown through simulations on real images from various application domains.
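
wQISA itself is not available in common Python libraries, so the sketch below substitutes an ordinary smoothing B-spline (SciPy's splprep) purely to illustrate the per-subset curve approximation step; the edge detection, the splitting into subsets, and the C0/G1 joining are not shown, and the smoothing value is an assumption:

```python
import numpy as np
from scipy.interpolate import splprep, splev

def approximate_edge_subset(edge_pts, smoothing=5.0, n_samples=200):
    """Fit a cubic smoothing spline to one ordered subset of edge points and
    return densely sampled curve points (stand-in for the wQISA local fit)."""
    pts = np.asarray(edge_pts, dtype=float)
    tck, _ = splprep([pts[:, 0], pts[:, 1]], s=smoothing, k=3)
    u = np.linspace(0.0, 1.0, n_samples)
    x, y = splev(u, tck)
    return np.column_stack([x, y])
```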


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Chengshan Yang ◽  
Jingbing Li ◽  
Uzair Aslam Bhatti ◽  
Jing Liu ◽  
Jixin Ma ◽  
...  

Digital medical systems not only facilitate the storage and transmission of medical information but also raise information security problems. To protect medical images, a robust zero-watermarking algorithm for medical images based on Zernike-DCT is proposed. The algorithm first preprocesses and encrypts the watermark with a chaotic logistic sequence, then performs edge detection and Zernike moment processing on the original medical image to obtain accurate edge points, and applies the discrete cosine transform (DCT) to them to obtain the feature vector. Finally, perceptual hashing and zero-watermark technology are combined to generate the key that completes watermark embedding and extraction. The algorithm is robust to both conventional and geometric attacks, has strong noise resistance, high positioning accuracy, and high processing efficiency, and outperforms classical edge detection algorithms in extraction quality, making it a stable and reliable image edge detection algorithm.
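
The zero-watermark principle described here (a feature vector XORed with the encrypted watermark to form a key, with the image left untouched) can be sketched compactly. The code below is an illustration under assumptions, not the authors' pipeline: the Zernike-moment edge step is omitted, the feature is a simple median hash of low-frequency DCT coefficients, and a logistic map stands in for the chaotic encryption:

```python
import numpy as np
from scipy.fft import dctn

def logistic_bits(n, x0=0.37, mu=3.99):
    """Binary chaotic sequence from the logistic map, used to encrypt the watermark."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x * (1 - x)
        xs.append(x)
    return (np.array(xs) > 0.5).astype(np.uint8)

def zero_watermark_key(image, watermark_bits):
    """Build the zero-watermark key: perceptual-hash-style DCT feature XOR
    encrypted watermark (assumes a watermark of at most 64 bits)."""
    coeffs = dctn(image.astype(float), norm="ortho")
    block = coeffs[:8, :8].ravel()[: len(watermark_bits)]
    feature = (block > np.median(block)).astype(np.uint8)     # robust 0/1 feature
    encrypted = np.asarray(watermark_bits, dtype=np.uint8) ^ logistic_bits(len(watermark_bits))
    return feature ^ encrypted        # stored as the key; the image is never modified
```

Extraction reverses the same steps: recompute the feature from the (possibly attacked) image, XOR it with the stored key, and decrypt with the same logistic sequence.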


2021 ◽  
Vol 2101 (1) ◽  
pp. 012034
Author(s):  
Zhiqiang Yu ◽  
Mao Zhang ◽  
Jiaoyu Xiao

Abstract. In modern industry, multi-sensor metrology methods are increasingly applied for fast and accurate 3D data acquisition. These methods typically start with a fast initial digitization by an optical digitizer; the obtained 3D data are analyzed to extract information that guides precise re-digitization and multi-sensor data fusion. The raw measurement output of an optical digitizer is a dense, unsorted point set with defects, so a new analysis method has to be developed to process the data and prepare it for metrological verification. This article presents a novel algorithm for managing measured data from optical systems. A robust edge-point recognition method is proposed to segment edge points from a 3D point cloud. The remaining point cloud is then divided into patches by Euclidean distance clustering. A simple RANSAC-based method identifies the feature of each segmented data patch and derives its parameters. Subsequently, a dedicated region growing algorithm refines the segmentation of under-segmented regions. The proposed method is experimentally validated on various industrial components. Comparisons with state-of-the-art methods indicate that the proposed feature surface extraction is feasible, achieves favorable performance, and facilitates automation for industrial components.
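
As a rough illustration of the segmentation stage only (edge-point recognition and the region-growing refinement are omitted), the sketch below uses DBSCAN as a stand-in for the Euclidean distance clustering and a plain RANSAC plane fit as the primitive identification; all parameter values are assumptions:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def ransac_plane(points, n_iter=200, tol=0.5):
    """Simple RANSAC plane fit; returns a boolean inlier mask."""
    best = np.zeros(len(points), dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                                  # degenerate sample
        n /= np.linalg.norm(n)
        inliers = np.abs((points - p0) @ n) < tol     # point-to-plane distance test
        if inliers.sum() > best.sum():
            best = inliers
    return best

def segment_patches(points, edge_mask, eps=1.0, min_points=30):
    """Cluster the non-edge points into patches, then fit a plane to each patch."""
    non_edge = points[~edge_mask]
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(non_edge)
    return {lbl: (non_edge[labels == lbl], ransac_plane(non_edge[labels == lbl]))
            for lbl in set(labels) - {-1}}
```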


2021 ◽  
Vol 13 (13) ◽  
pp. 2526
Author(s):  
Weite Li ◽  
Kyoko Hasegawa ◽  
Liang Li ◽  
Akihiro Tsukamoto ◽  
Satoshi Tanaka

Large-scale 3D-scanned point clouds enable accurate and easy recording of complex 3D objects in the real world. The acquired point clouds often describe both the surficial and internal 3D structure of the scanned objects. The recently proposed edge-highlighted transparent visualization method is effective for recognizing the whole 3D structure of such point clouds: it modulates opacity to highlight the edges of the 3D-scanned objects and thus provides clear transparent views of the entire 3D structure. However, for 3D-scanned point clouds, the quality of any edge-highlighting visualization depends on the distribution of the extracted edge points; insufficient density, sparseness, or partial defects in the edge points can lead to unclear edge visualization. Therefore, in this paper, we propose a deep learning-based upsampling method that focuses on the edge regions of 3D-scanned point clouds and generates additional edge points during the 3D-edge upsampling task. The proposed upsampling network dramatically improves the point density, uniformity, and connectivity in the edge regions. Results on synthetic and scanned edge data show that our method improves the percentage of edge points by more than 15% compared with an existing point cloud upsampling network. Our upsampling network works well for both sharp and soft edges, and it can be combined with a noise-eliminating filter. We demonstrate its effectiveness by applying it to various real 3D-scanned point clouds and show that the improved edge point distribution improves the visibility of the edge-highlighted transparent visualization of complex 3D-scanned objects.
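
The upsampling network itself cannot be condensed here, but the "percentage of edge points" metric presupposes an edge detector on the point cloud. The sketch below is an assumed, conventional covariance-based edge detector (not the authors' method) that could be used to evaluate such a percentage:

```python
import numpy as np
from scipy.spatial import cKDTree

def edge_point_mask(points, k=30, threshold=0.05):
    """Mark points whose local 'surface variation' (smallest covariance
    eigenvalue over the eigenvalue sum) is large, i.e. points near edges."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)            # k nearest neighbours per point
    mask = np.zeros(len(points), dtype=bool)
    for i, nbrs in enumerate(idx):
        w = np.linalg.eigvalsh(np.cov(points[nbrs].T))   # ascending eigenvalues
        mask[i] = w[0] / w.sum() > threshold             # ~0 on flat regions
    return mask

# Illustrative edge-point percentage for a cloud of shape (N, 3):
# percentage = 100.0 * edge_point_mask(cloud).mean()
```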


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Xiaokang Yu ◽  
Zhiwen Wang ◽  
Yuhang Wang ◽  
Canlong Zhang

The traditional Canny edge detection algorithm has limited noise immunity and is susceptible to factors such as illumination. To address these defects, a morphologically improved Canny algorithm is proposed and applied to the detection of agricultural products. First, the algorithm replaces the Gaussian filter with a morphological filter built from opening and closing operations, which removes image noise while better protecting image edges. Second, the traditional Canny operator is extended: in addition to the horizontal and vertical templates, 45° and 135° directional templates are added to improve edge localization. Finally, an adaptive threshold segmentation method performs a rough segmentation, and double detection thresholds are then applied for further segmentation to obtain the final edge points. The experimental results show that, compared with the traditional algorithm for edge detection of agricultural products, the proposed algorithm effectively avoids false contours caused by illumination and other factors and improves noise immunity while detecting the edges of real agricultural products more accurately and finely.
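
A minimal sketch of the described pipeline with OpenCV is given below. The open-close morphological filter and the double thresholds follow the abstract; Otsu's method is used here as the adaptive rough-segmentation stage (an assumption), and the 45°/135° directional templates are not reproduced because cv2.Canny uses its own gradient operator, so treat this as an approximation rather than the authors' algorithm:

```python
import cv2

def morphological_canny(gray, ksize=3):
    """Open-close morphological filtering instead of Gaussian smoothing,
    followed by Canny with adaptively chosen double thresholds."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    opened = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)       # removes bright noise
    filtered = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)  # fills dark gaps
    high, _ = cv2.threshold(filtered, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # adaptive high level
    return cv2.Canny(filtered, 0.5 * high, high)
```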


2021 ◽  
Vol 10 (4) ◽  
pp. 229
Author(s):  
Maria Melina Dolapsaki ◽  
Andreas Georgopoulos

This paper presents an effective semi-automated method for detecting 3D edges in 3D point clouds with the help of high-resolution digital images. The effort aims to contribute towards the unsolved problem of automatically producing vector drawings from 3D point clouds of cultural heritage objects. Edges are the simplest primitives to detect in an unorganized point cloud, and an algorithm was developed to perform this task. The desired edges are defined and measured on 2D digital images of known orientation, and the algorithm determines the plane defined by the edge on the image and its perspective center. This is accomplished by applying suitable transformations to the image coordinates of the edge points, based on analytical geometry relationships and properties of planes in 3D space. This plane necessarily contains the 3D points of the edge in the point cloud. The algorithm then detects and isolates the points that define the edge in the world system. Finally, the goal is to reliably locate the points that describe the desired edge in their true position in geodetic space, using several constraints. The algorithm is first investigated theoretically for its efficiency using simulated data and then assessed under real conditions with different image orientations and edge lengths on the image. The results are presented and evaluated.
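
The geometric core of the method, as described, is that the measured image edge and the perspective centre span a plane in object space, and the sought 3D edge points must lie (up to noise) on that plane. The sketch below shows only that selection step; the image-to-world transformation of the ray directions and the additional constraints used to isolate the true edge are assumed to happen elsewhere, and all names and tolerances are illustrative:

```python
import numpy as np

def points_on_edge_plane(cloud, cam_center, ray1, ray2, tol=0.01):
    """Select point cloud points lying near the plane spanned by the two
    image rays through the edge endpoints and the perspective centre.
    ray1/ray2 are direction vectors already rotated into the world system."""
    n = np.cross(ray1, ray2)
    n = n / np.linalg.norm(n)                  # unit normal of the edge plane
    dist = np.abs((cloud - cam_center) @ n)    # orthogonal distance to the plane
    return cloud[dist < tol]
```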


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2375
Author(s):  
Jingjing Xiong ◽  
Lai-Man Po ◽  
Kwok Wai Cheung ◽  
Pengfei Xian ◽  
Yuzhi Zhao ◽  
...  

Deep reinforcement learning (DRL) has been utilized in numerous computer vision tasks, such as object detection and autonomous driving. However, relatively few DRL methods have been proposed for image segmentation, particularly left ventricle (LV) segmentation. Earlier reinforcement learning-based methods often rely on learning appropriate thresholds to perform segmentation, and the results are inaccurate because of their sensitivity to the threshold. To tackle this problem, a novel DRL agent is designed to imitate the human process of performing LV segmentation. For this purpose, we formulate the segmentation problem as a Markov decision process and optimize it through DRL. The proposed DRL agent consists of two neural networks, First-P-Net and Next-P-Net. First-P-Net locates the initial edge point, and Next-P-Net locates the remaining edge points successively until a closed segmentation result is obtained. The experimental results show that the proposed model outperforms previous reinforcement learning methods and achieves performance comparable to deep learning baselines on two widely used LV endocardium segmentation datasets, the Automated Cardiac Diagnosis Challenge (ACDC) 2017 dataset and the Sunnybrook 2009 dataset. Moreover, the proposed model achieves higher F-measure accuracy than deep learning methods when trained with a very limited number of samples.
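
The inference procedure implied by the two networks can be paraphrased as a simple loop. The sketch below is an interpretation of the description above, not the released model: first_p_net and next_p_net are placeholders for the trained networks, and the contour-closing criterion is an assumption:

```python
import numpy as np

def segment_lv(image, first_p_net, next_p_net, max_steps=200, close_tol=2.0):
    """First-P-Net proposes the initial edge point; Next-P-Net is then applied
    repeatedly, conditioned on the image and the points found so far, until
    the contour returns close to its starting point (closed segmentation)."""
    contour = [np.asarray(first_p_net(image), dtype=float)]
    for _ in range(max_steps):
        nxt = np.asarray(next_p_net(image, contour), dtype=float)
        contour.append(nxt)
        if len(contour) > 3 and np.linalg.norm(nxt - contour[0]) < close_tol:
            break                               # contour closed
    return np.array(contour)
```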

