contour information
Recently Published Documents

Total documents: 106 (last five years: 18)
H-index: 17 (last five years: 0)

2021, Vol 14 (1), pp. 26
Author(s): Weixin Li, Ming Li, Lei Zuo, Hao Sun, Hongmeng Chen, ...

Traditional forward-looking super-resolution methods mainly concentrate on enhancing resolution in scenes with ground clutter or no clutter. However, sea clutter is present in sea-surface target imaging, along with ground clutter when the imaging scene is a seacoast. Meanwhile, restoring the contour information of the target is important in applications such as autonomous landing on a ship. This paper aims to realize forward-looking imaging of a sea-surface target. We propose a multi-prior Bayesian method that accounts for the imaging environment by fusing the contour information and the sparsity of the sea-surface target. First, because more than one kind of clutter exists in the imaging environment, we introduce a Gaussian mixture model (GMM) as the prior that describes the interference of clutter and noise. Second, we fuse a total variation (TV) prior and a Laplace prior into a multi-prior that models the contour information and the sparsity of the target. Third, we introduce a latent variable to simplify the log-likelihood function. Finally, the maximum a posteriori expectation-maximization (MAP-EM) method is used to solve for the optimal parameters. Experimental results illustrate that the multi-prior Bayesian method enhances the azimuth resolution while preserving the contour information of the sea-surface target.
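
The fused TV-plus-sparsity prior at the heart of this method can be illustrated with a toy sketch. The hypothetical Python example below (not the authors' code; all names and parameters are illustrative) deconvolves a 1D azimuth profile by smoothed subgradient descent on a least-squares data term with TV and Laplace (L1) penalties; the GMM clutter model and the full MAP-EM machinery are omitted.

```python
import numpy as np

def tv_l1_deconv(y, A, lam_tv=0.1, lam_l1=0.05, steps=300, lr=0.5, eps=1e-6):
    """Sketch: minimize ||Ax - y||^2 + lam_tv*TV(x) + lam_l1*||x||_1 by
    (smoothed sub)gradient descent. Stands in for the multi-prior MAP step;
    the paper's GMM clutter model and EM updates are not reproduced."""
    x = A.T @ y                             # matched-filter initialization
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz scale of the data term
    for _ in range(steps):
        grad_data = 2 * A.T @ (A @ x - y)
        d = np.diff(x)
        sg = d / np.sqrt(d * d + eps)       # smoothed sign of finite differences
        grad_tv = np.zeros_like(x)
        grad_tv[:-1] -= sg                  # d/dx_i of |x_{i+1} - x_i|
        grad_tv[1:] += sg                   # d/dx_i of |x_i - x_{i-1}|
        grad_l1 = x / np.sqrt(x * x + eps)  # smoothed |x| subgradient
        x -= (lr / L) * (grad_data + lam_tv * grad_tv + lam_l1 * grad_l1)
    return x

# toy usage: two closely spaced scatterers blurred by an azimuth beam pattern
rng = np.random.default_rng(0)
n = 128
h = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
A = np.array([np.convolve(np.eye(n)[i], h, mode="same") for i in range(n)]).T
x_true = np.zeros(n)
x_true[40], x_true[44] = 1.0, 0.8
y = A @ x_true + 0.02 * rng.standard_normal(n)
x_hat = tv_l1_deconv(y, A)
```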


2021
Author(s): Shudi Yang, Zhehan Chen, Jiaxiong Wu, Zhipeng Feng

Underwater vision research is the foundation of marine-related disciplines. Target contour extraction is of great significance to target tracking and visual information mining. Because conventional active contour models cannot effectively extract the contours of salient targets in underwater images, we propose a dual-fusion active contour model with semantic information. First, saliency images are introduced as semantic information, and salient target contours are extracted by fusing the Chan–Vese and local binary fitting models. Then, the original underwater images are used to supplement the missing contour information through local image fitting. Compared with state-of-the-art contour extraction methods, our dual-fusion active contour model can effectively filter out background information and accurately extract salient target contours.
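
The two-stage idea (segment under saliency guidance, then refine on the original image) can be sketched with off-the-shelf tools. The hypothetical snippet below uses scikit-image's morphological Chan–Vese as a stand-in for the paper's fused Chan–Vese/local-binary-fitting energy; it assumes a precomputed saliency map in [0, 1], with iteration counts chosen arbitrarily.

```python
from skimage import color
from skimage.segmentation import morphological_chan_vese

def dual_stage_contour(image_rgb, saliency):
    """Sketch of the dual-stage idea: first segment the saliency map
    (semantic guidance) with a region-based active contour, then refine
    on the original image to recover contour detail the saliency misses.
    morphological_chan_vese stands in for the paper's fused model."""
    gray = color.rgb2gray(image_rgb)
    # stage 1: lock onto the salient target using the saliency map
    seg_sal = morphological_chan_vese(saliency, 80, init_level_set="checkerboard")
    # stage 2: refine on the original image, initialized from stage 1
    return morphological_chan_vese(gray, 40, init_level_set=seg_sal)
```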


2021, Vol 13 (21), pp. 4231
Author(s): Fangfang Shen, Xuyang Chen, Yanming Liu, Yaocong Xie, Xiaoping Li

Conventional compressive sensing (CS)-based imaging methods allow images to be reconstructed from a small amount of data, but they suffer from a high computational burden even for a moderately sized scene. To address this problem, this paper presents a novel two-dimensional (2D) CS imaging algorithm for strip-map synthetic aperture radars (SARs) with zero squint angle. By introducing a 2D separable formulation that models the physical procedure of SAR imaging, we separate the large measurement matrix into two small ones, so the resulting algorithm can process the 2D signal directly instead of converting it into a 1D vector. As a result, the computational load is reduced significantly. Furthermore, because of its superior performance in preserving contour information, the gradient space of the SAR image is exploited and a total variation (TV) constraint is incorporated to improve resolution. Because the TV regularizer is non-differentiable, the induced TV regularization problem is difficult to solve directly. To overcome this, an improved split Bregman method is presented that reformulates the TV minimization problem as a sequence of unconstrained optimization problems and Bregman updates, yielding an accurate and simple solution. Finally, synthetic and real experimental results demonstrate that the proposed algorithm remains competitive in terms of high resolution and high computational efficiency.
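
The computational benefit of the separable formulation is easy to verify numerically. The toy check below (random stand-ins for the sensing matrices; the TV/split-Bregman solver is omitted) confirms that the separable model Y = A X Bᵀ matches the equivalent Kronecker-product 1D model, which would require storing a matrix of size (m1·m2) x (n1·n2).

```python
import numpy as np

rng = np.random.default_rng(1)
m1, m2, n1, n2 = 32, 32, 64, 64      # measurements vs. scene size (toy)
A = rng.standard_normal((m1, n1))    # range-direction sensing matrix (stand-in)
B = rng.standard_normal((m2, n2))    # azimuth-direction sensing matrix (stand-in)
X = rng.standard_normal((n1, n2))    # scene reflectivity (stand-in)

# separable 2D model: apply two small matrices directly to the 2D signal
Y = A @ X @ B.T

# equivalent 1D model: (B kron A) acting on vec(X); forming it explicitly
# costs (m1*m2) x (n1*n2) memory, which the separable form avoids
K = np.kron(B, A)                    # only feasible at toy sizes
y_vec = K @ X.flatten(order="F")     # column-major vec(X)
assert np.allclose(y_vec, Y.flatten(order="F"))
```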


Author(s): Zhihao Cui, Ting Zheng

Human–computer interaction systems have been developed in large numbers and quickly applied to sports. Badminton is well suited to robotics research because it requires quick recognition and fast movement. For the development of badminton recognition and tracking systems, it is important to accurately identify the shuttlecock, the court, and the opponent. In this paper, we designed and developed a badminton recognition and tracking system using two 2-megapixel high-speed cameras. The tracking system runs at 250 fps, while the maximum speed of the shuttlecock is about 300 km/h. The system captures images from the high-speed cameras over the Camera Link interface and processes all captured images in real time using different region-of-interest settings. To improve accuracy, we propose a new method for locating the center point of the shuttlecock. We also propose a detector that finds the four corner points of the court from its contour information once the approximate position of the court is known: a sensing area is set around the approximate court position, the points closest to the contour are selected using a histogram within the sensing area, and the intersections of the fitted lines are taken as the corner points of the court. The proposed corner detector has a high detection rate and is more than 10 times more accurate than traditional detectors. The moving shuttlecock is detected by an ellipse detector: four candidate ellipse contours are selected, and the center of the correct ellipse is found among them. Compared with conventional circle detectors, the proposed ellipse detector reduces the error of points in three-dimensional coordinates by about 3 mm.
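
A minimal sketch of the candidate-ellipse selection step, in OpenCV-style Python (hypothetical code; the candidate count, score, and helper name are illustrative guesses, not the paper's method):

```python
import cv2
import numpy as np

def shuttle_center(mask, max_candidates=4):
    """Sketch: pick the most ellipse-like contour among a few candidates
    and return its fitted center. `mask` is assumed to be a binary (uint8)
    foreground image, e.g. from background subtraction."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # keep the largest few contours as candidates
    cands = sorted(contours, key=cv2.contourArea, reverse=True)[:max_candidates]
    best, best_err = None, np.inf
    for c in cands:
        if len(c) < 5:                       # cv2.fitEllipse needs >= 5 points
            continue
        (cx, cy), (w, h), angle = cv2.fitEllipse(c)
        # score: agreement between contour area and fitted-ellipse area
        err = abs(cv2.contourArea(c) - np.pi * (w / 2) * (h / 2))
        if err < best_err:
            best, best_err = (cx, cy), err
    return best
```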


2021, Vol 15
Author(s): Cheng Wan, Jiasheng Wu, Han Li, Zhipeng Yan, Chenghu Wang, ...

In recent years, an increasing number of people in China have myopia, especially among the younger generation. Common myopia may develop into high myopia, which can cause visual impairment and blindness. Parapapillary atrophy (PPA) is a typical retinal pathology related to high myopia and a basic clue for diagnosing it. Therefore, accurate segmentation of the PPA is essential for the diagnosis and treatment of high myopia. In this study, we propose an optimized Unet (OT-Unet) for this task. OT-Unet uses one of three pre-trained models (Visual Geometry Group (VGG), ResNet, or Res2Net) as a backbone, combined with edge attention, parallel partial decoder, and reverse attention modules to improve segmentation accuracy. In general, pre-trained models improve accuracy when few samples are available. The edge attention module extracts contour information, the parallel partial decoder combines multi-scale features, and the reverse attention module integrates high- and low-level features. We also propose an augmented loss function that increases the weight of complex pixels, enabling the network to segment more complex lesion areas. On a dataset of 360 images (including 26 provided by PALM), the proposed OT-Unet achieves a high area under the curve (AUC) of 0.9235, a significant improvement over the original Unet (0.7917).
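
One plausible reading of the augmented loss (up-weighting complex pixels) is a boundary-weighted cross entropy. The PyTorch sketch below is an assumption-labeled illustration of that idea, not the paper's exact weighting scheme; the function name and weight value are hypothetical.

```python
import torch
import torch.nn.functional as F

def augmented_bce_loss(logits, target, boundary_weight=5.0):
    """Sketch in the spirit of the paper's augmented loss (exact scheme
    assumed, not reproduced): boundary pixels of the binary PPA mask are
    found with a max-pool based morphological gradient and up-weighted."""
    dil = F.max_pool2d(target, 3, stride=1, padding=1)    # dilation
    ero = -F.max_pool2d(-target, 3, stride=1, padding=1)  # erosion
    boundary = (dil - ero).clamp(0, 1)                    # one-pixel band
    weight = 1.0 + boundary_weight * boundary
    return F.binary_cross_entropy_with_logits(logits, target, weight=weight)

# usage with a batch of predictions and binary masks, shape (N, 1, H, W)
logits = torch.randn(2, 1, 128, 128)
target = (torch.rand(2, 1, 128, 128) > 0.5).float()
loss = augmented_bce_loss(logits, target)
```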


2021, Vol 13 (6), pp. 1049
Author(s): Cheng Liao, Han Hu, Haifeng Li, Xuming Ge, Min Chen, ...

Most existing approaches to extracting buildings from high-resolution orthoimages treat the problem as semantic segmentation, which extracts a pixel-wise mask for buildings and trains end-to-end on manually labeled building maps. However, because buildings are highly structured, this strategy suffers from several problems, such as blurred boundaries and adhesion to nearby objects. To alleviate these problems, we propose a new strategy that also considers the contours of the buildings; both the contours and the structures of the buildings are jointly learned in the same network. The contours are learnable because the boundaries of the building mask labels implicitly represent the building contours. We utilized the contour information embedded in the labels to optimize the representation of building boundaries, then combined it with multi-scale semantic features to enhance robustness to image spatial resolution. The experimental results showed that the proposed method achieved 91.64%, 81.34%, and 74.51% intersection over union (IoU) on the WHU, Aerial, and Massachusetts building datasets, outperforming the state-of-the-art (SOTA) methods. It significantly improved the accuracy of building boundaries, especially for the edges of adjacent buildings. The code is publicly available.
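
The observation that contour labels come "for free" from mask labels can be made concrete: a morphological gradient of the binary mask yields a contour supervision map. A minimal sketch (hypothetical helper; the 3x3 kernel is an illustrative choice):

```python
import numpy as np
import cv2

def contour_label_from_mask(mask):
    """Derive a one-pixel-wide contour label from a binary building mask.
    Illustrates why contours need no extra annotation: the mask boundary
    already encodes them. Kernel size is an illustrative choice."""
    kernel = np.ones((3, 3), np.uint8)
    # morphological gradient = dilation - erosion = thin boundary band
    grad = cv2.morphologyEx(mask.astype(np.uint8), cv2.MORPH_GRADIENT, kernel)
    return (grad > 0).astype(np.float32)

# toy usage: a square "building" yields a hollow square contour label
mask = np.zeros((16, 16), np.uint8)
mask[4:12, 4:12] = 1
contour = contour_label_from_mask(mask)
```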


PLoS ONE, 2021, Vol 16 (1), pp. e0242581
Author(s): Maddex Farshchi, Alexandra Kiba, Tadamasa Sawada

Artists can represent a 3D object using only contours in a 2D drawing. Prior studies have shown that people can use such drawings to perceive 3D shapes reliably, but it is not clear how useful this kind of contour information actually is in a real dynamic scene in which people interact with objects. To address this issue, we developed an Augmented Reality (AR) device that can show a participant either a contour drawing or a grayscale image of a real dynamic scene in an immersive manner. In three behavioral experiments, we compared people's performance on a variety of everyday tasks with both contour drawings and grayscale images under natural viewing conditions. The results showed that people performed almost equally well with both types of images. Contour information may therefore be sufficient for our visual system to obtain much of the 3D information needed for successful visuomotor interactions in everyday life.

