Robust Analysis and Laser Stripe Center Extraction for Rail Images

2021 ◽  
Vol 11 (5) ◽  
pp. 2038
Author(s):  
Huiping Gao ◽  
Guili Xu

In this paper, a novel method for the effective extraction of the light stripes in rail images is proposed. First, a preprocessing procedure that includes self-adaptive threshold segmentation and brightness enhancement is applied to improve the quality of the rail image. Second, the center of mass is used to detect the center point of each row of the image. Then, to speed up centerline optimization, the detected center points are segmented into several parts based on the geometry of the rail profile. Finally, piecewise fitting is adopted to obtain a smooth and robust centerline. The performance of this method is analyzed in detail, and experimental results show that the proposed method works well for rail images.
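
The per-row center-of-mass step described above can be sketched as follows; this is a minimal illustration of the general technique, not the paper's implementation, and the synthetic image and `min_mass` cutoff are assumptions:

```python
import numpy as np

def stripe_centers(img, min_mass=1e-6):
    """Per-row centre of mass of a laser-stripe image (rows x cols).
    Rows carrying no stripe signal are returned as NaN."""
    cols = np.arange(img.shape[1])
    mass = img.sum(axis=1)                          # total intensity per row
    with np.errstate(invalid="ignore", divide="ignore"):
        centers = (img * cols).sum(axis=1) / mass   # intensity-weighted mean column
    centers[mass < min_mass] = np.nan
    return centers

# synthetic stripe centred on column 5 in every row
img = np.zeros((4, 11))
img[:, 4:7] = [1.0, 2.0, 1.0]
centers = stripe_centers(img)
```

Each row yields one sub-pixel center estimate; the paper then splits these points by rail geometry and fits each part separately.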

2014 ◽  
Vol 610 ◽  
pp. 358-361
Author(s):  
Hong Wei Di ◽  
Wei Xu

To solve the problem that the traditional threshold segmentation model is not robust for skin segmentation under different skin colors and illuminations, an improved adaptive skin color model is proposed. The model measures the change rate of detected skin pixels while varying one threshold and keeping the others fixed, then adaptively selects the optimum threshold. Experimental results show that this algorithm effectively distinguishes skin regions from background regions and is strongly robust to lighting disturbance.
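
The threshold-scanning idea can be sketched as below. The abstract does not state the exact selection criterion, so picking the candidate with the smallest change rate (a plateau in the skin-pixel count) is our assumption:

```python
import numpy as np

def best_threshold(channel, candidates):
    """Scan candidate values for one threshold (the others held fixed)
    and pick the one where the detected skin-pixel count is most stable,
    i.e. where the change rate between neighbouring candidates is
    smallest. The minimum-change-rate rule is an assumption."""
    counts = np.array([(channel >= t).sum() for t in candidates], float)
    rates = np.abs(np.diff(counts))       # change rate along the scan
    return candidates[int(np.argmin(rates))]

# toy channel with a stable region between thresholds 0.2 and 0.4
channel = np.array([0.1, 0.1, 0.45, 0.45, 0.45, 0.9])
t = best_threshold(channel, [0.2, 0.3, 0.4, 0.5])
```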


2020 ◽  
Vol 2020 ◽  
pp. 1-8
Author(s):  
Xueping Su ◽  
Meng Gao ◽  
Jie Ren ◽  
Yunhong Li ◽  
Matthias Rätsch

With the continuous development of the economy, consumers pay more attention to personalized clothing. However, the recommendation quality of existing clothing recommendation systems is not enough to meet users' needs. When browsing clothing online, facial expression is salient information for understanding the user's preference. In this paper, we propose a novel method that automatically personalizes clothing recommendation based on analysis of the user's emotions. First, the facial expression is classified by a multiclass SVM. Next, the user's multi-interest value is calculated using the expression intensity, which is obtained by a hybrid RCNN. Finally, the multi-interest values are fused to produce personalized recommendations. Experimental results show that the proposed method achieves a significant improvement over other algorithms.
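
The final fusion step might look like the sketch below. The abstract does not give the fusion rule, so the linear blend, the `alpha` weight, and the use of a history score are all illustrative assumptions:

```python
def fuse_interest(expression_score, intensity, history_score, alpha=0.6):
    """Hypothetical late fusion: an expression-based interest value
    (classifier confidence scaled by expression intensity) is blended
    with a browsing-history score. alpha and the linear form are
    illustrative assumptions, not the paper's actual fusion rule."""
    emotional = expression_score * intensity
    return alpha * emotional + (1 - alpha) * history_score

score = fuse_interest(expression_score=0.8, intensity=0.5, history_score=0.4)
```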


2013 ◽  
Vol 325-326 ◽  
pp. 1571-1575
Author(s):  
Fang Wang ◽  
Zong Wei Yang ◽  
De Ren Kong ◽  
Yun Fei Jia

Shadowgraphy is an important method for obtaining the flight characteristics of high-speed objects, such as attitude and speed. Extracting the object's contour and the coordinates of feature points from a shadowgraph is a precondition of this analysis. Digital shadowgraph systems composed of a CCD camera and a pulsed laser source are widely used, but corresponding image-processing methods are still lacking, so selecting an effective processing method that ensures efficient and accurate interpretation of the image data is an urgent problem. Based on the features of shadowgraphs, this paper proposes a method that extracts the contour of a high-speed object by median filtering followed by adaptive threshold segmentation, verifies it with OpenCV in a VC environment, and recognizes the feature points. The results indicate that the method detects the contours of high-speed objects well and, combined with the relevant algorithms, accurately recognizes the pixel coordinates of feature points such as the center of mass.
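
The filter-threshold-centroid pipeline can be sketched in plain NumPy (the paper uses OpenCV, e.g. `cv2.medianBlur`; the 3x3 kernel, the mean-based threshold rule, and the synthetic frame here are simplifying assumptions):

```python
import numpy as np

def median3(img):
    """3x3 median filter (border pixels left unchanged); a minimal
    stand-in for the median filtering stage."""
    out = img.copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = np.median(img[y - 1:y + 2, x - 1:x + 2])
    return out

def extract_object(img):
    """Denoise, threshold at the image mean (a deliberately simple
    adaptive rule), and return the binary mask together with the
    centre of mass of the detected object."""
    smooth = median3(img)
    mask = smooth > smooth.mean()
    ys, xs = np.nonzero(mask)
    return mask, (ys.mean(), xs.mean())

# 9x9 frame: a bright 3x3 "object" plus one isolated noise pixel
frame = np.zeros((9, 9))
frame[3:6, 3:6] = 1.0
frame[1, 7] = 1.0                  # salt noise, removed by the filter
mask, centroid = extract_object(frame)
```

The median filter discards the isolated noise pixel before thresholding, so the recovered center of mass stays on the object.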


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Ruixin Ma ◽  
Junying Lou ◽  
Peng Li ◽  
Jing Gao

Generating pictures from text is an interesting, classic, and challenging task. Benefiting from the development of generative adversarial networks (GANs), the generation quality for this task has greatly improved, and many excellent cross-modal GAN models have been put forward. These models add extensive layers and constraints to produce impressive images. However, the complexity and computational cost of existing cross-modal GANs are too high for deployment on mobile terminals. To solve this problem, this paper designs a compact cross-modal GAN based on canonical polyadic decomposition. We replace an original convolution layer with three small convolution layers and use an autoencoder to stabilize and speed up training. Experimental results show that our model achieves roughly 20-fold compression in both parameters and FLOPs without loss of quality in the generated images.
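
The parameter arithmetic behind replacing one convolution with three small ones can be sketched as follows. The particular three-layer factorisation (1x1, depthwise kxk, 1x1) and the example sizes are assumptions; the paper's exact CP scheme may differ:

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def cp3_params(c_in, c_out, k, rank):
    """One plausible three-layer factorisation of that convolution:
    a 1x1 conv (c_in -> rank), a depthwise k x k conv on the rank
    channels, and a 1x1 conv (rank -> c_out)."""
    return c_in * rank + rank * k * k + rank * c_out

c_in, c_out, k, rank = 256, 256, 3, 32
ratio = conv_params(c_in, c_out, k) / cp3_params(c_in, c_out, k, rank)
```

With a low rank relative to the channel counts, the three-layer form needs far fewer parameters than the original layer, which is what makes mobile deployment plausible.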


2018 ◽  
Vol 7 (3) ◽  
pp. 82-85
Author(s):  
A. George Louis Raja ◽  
F. Sagayaraj Francis ◽  
P. Sugumar

Existing semantic methods cluster documents based on unabridged or abridged term comparisons. After clustering, these terms are not preserved, so the cluster operation must be repeated in its entirety when new documents arrive; such semantic clustering methods can therefore be considered "on the go" methods, and re-clustering is unavoidable in both iterative and incremental clustering. It would be more appropriate to build and evolve a lexicon from the derived keywords of the documents and to consult it in subsequent cluster operations. The rationale is to avoid re-clustering on new documents by referring to the lexicon to formulate clusters as long as cluster quality remains intact, and to repeat the cluster operation only when quality degrades past a threshold. Since re-clustering is delayed until this breakeven point, it becomes faster overall. The approach may incur additional runtime complexity, but it greatly simplifies and speeds up re-clustering. This paper discusses the construction of lexicons and their application to clustering. The Keyword-Based Lexicon Construction Algorithm (KBLCA) is demonstrated for building lexicons, and the breakeven point for re-clustering is proposed and described. The theory of deferring re-clustering is outlined, along with experimental results.
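
The lexicon-first assignment idea can be sketched as below. This is a much-simplified stand-in for KBLCA: the keyword-set union, the overlap score, and the `min_overlap` breakeven threshold are all illustrative assumptions:

```python
def build_lexicon(clusters):
    """Map each cluster id to the union of keywords derived from its
    documents -- a simplified stand-in for the paper's KBLCA."""
    return {cid: set().union(*docs) for cid, docs in clusters.items()}

def assign(doc_keywords, lexicon, min_overlap=0.5):
    """Place a new document in the cluster whose lexicon best overlaps
    its keywords; return None when the best overlap falls below
    min_overlap, signalling that the breakeven point is reached and a
    full re-clustering is due (the threshold value is illustrative)."""
    best, score = None, 0.0
    for cid, words in lexicon.items():
        overlap = len(doc_keywords & words) / max(len(doc_keywords), 1)
        if overlap > score:
            best, score = cid, overlap
    return best if score >= min_overlap else None

clusters = {"ml": [{"svm", "kernel"}, {"svm", "margin"}],
            "db": [{"sql", "index"}]}
lexicon = build_lexicon(clusters)
```

New documents that the lexicon explains well are absorbed cheaply; a `None` result marks the point at which deferred re-clustering is finally triggered.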


Author(s):  
Marlene Goncalves ◽  
María Esther Vidal

Criteria that induce a Skyline naturally represent user preference conditions useful for discarding irrelevant data in large datasets. However, in the presence of high-dimensional Skyline spaces, the size of the Skyline can still be very large. To identify the best k points among the Skyline, the Top-k Skyline approach has been proposed. This chapter describes existing solutions and proposes to use the TKSI algorithm for the Top-k Skyline problem. TKSI reduces the search space by computing only the subset of the Skyline that is required to produce the top-k objects. In addition, the Skyline Frequency Metric is implemented to discriminate, among the Skyline objects, those that best meet the multidimensional criteria. The chapter's authors have empirically studied the quality of TKSI, and their experimental results show that TKSI can speed up the computation of the Top-k Skyline by at least 50% with respect to state-of-the-art solutions.
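
The notions involved can be illustrated with a brute-force sketch: compute the skyline, then rank its points by skyline frequency (the number of non-empty subspaces in which a point remains a skyline point). Note this deliberately materialises every subspace skyline, which is exactly the cost TKSI is designed to avoid; the example data are assumptions:

```python
from itertools import combinations

def dominates(p, q):
    """p dominates q: p <= q in every dimension, < in at least one
    (minimisation convention)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points, dims=None):
    """Skyline of the points restricted to the given dimensions."""
    dims = list(dims) if dims is not None else list(range(len(points[0])))
    proj = lambda p: tuple(p[i] for i in dims)
    return [p for p in points
            if not any(dominates(proj(q), proj(p)) for q in points)]

def topk_skyline(points, k):
    """Rank full-space skyline points by skyline frequency and keep
    the top k."""
    d = len(points[0])
    subs = [c for r in range(1, d + 1) for c in combinations(range(d), r)]
    freq = {p: sum(p in skyline(points, s) for s in subs)
            for p in skyline(points)}
    return sorted(freq, key=freq.get, reverse=True)[:k]

points = [(1, 5), (2, 2), (5, 1), (4, 4)]   # minimise both dimensions
top2 = topk_skyline(points, 2)
```

Here (4, 4) is dominated and never appears, while the two extreme points win because each is also a skyline point in a one-dimensional subspace.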


Author(s):  
Steven A. Schmied ◽  
Jonathan R. Binns ◽  
Martin R. Renilson ◽  
Giles A. Thomas ◽  
Gregor J. Macfarlane ◽  
...  

In this paper, a novel idea to produce continuous breaking waves is discussed, whereby a pressure source is rotated within an annular wave pool. The concept is that the inner ring of the annulus has a sloping bathymetry to induce wave breaking from the wake of the pressure source. In order to refine the technique, work is being conducted to better understand the mechanics of surfable waves generated by moving pressure sources in restricted water. This paper reports on the first stage of an experimental investigation of a novel method for generating continuously surfable waves utilising a moving pressure source. The aim was to measure and assess the waves generated by two parabolic pressure sources and a wavedozer [1] for their suitability for future development of continuous breaking surfable waves. The tests were conducted at the Australian Maritime College (AMC), University of Tasmania (UTas) 100 metre long towing tank. The experimental results, expressed as wave height (H) divided by water depth (h) as a function of depth Froude number (Frh) and h, are presented together with predictions from both methods. Finally, measures of the wave-making energy efficiency of each pressure source, and of the surfable quality of the waves it generates, were developed and are presented.
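
The depth Froude number used to parameterise the results is a standard shallow-water quantity and can be computed directly (the example depth and speed are assumptions, not values from the tests):

```python
from math import sqrt

G = 9.81  # gravitational acceleration, m/s^2

def depth_froude(speed, depth):
    """Depth Froude number Frh = U / sqrt(g * h). Frh near 1 marks the
    critical regime in shallow water, where wave-making behaviour (and
    hence wave height H relative to depth h) changes sharply."""
    return speed / sqrt(G * depth)

# e.g. the critical towing speed for a 1 m deep section
critical_speed = sqrt(G * 1.0)          # about 3.13 m/s
```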


2011 ◽  
Vol 474-476 ◽  
pp. 771-776
Author(s):  
Guo Quan Zhang ◽  
Zhan Ming Li

To address the problem that the number and values of thresholds are difficult to determine automatically in multi-threshold color image segmentation, a novel multi-threshold segmentation method in HSV space is proposed. First, the image is preprocessed in HSV: the H and V components are projected onto S and quantified at the same time. Second, the histogram and an advanced Histon histogram (AHH) are constructed, and, following the concept of roughness in rough set theory, a roughness histogram (RSH) is built. Finally, according to the required segmentation accuracy, a threshold Hn is set on the RSH to determine the number and ranges of the multiple thresholds, and the image is segmented with these thresholds. Experimental results show that this method determines the number of thresholds automatically, segments images efficiently, and is robust against illumination variation.
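
The roughness-histogram step can be sketched as below, treating the ordinary histogram as the rough-set lower approximation and the Histon as the upper one; the run-based rule for turning the Hn cut into threshold intervals is our simplification, and the toy histograms are assumptions:

```python
import numpy as np

def roughness_hist(hist, histon):
    """Per-bin roughness 1 - h/H (bins with an empty upper
    approximation get roughness 0)."""
    hist, histon = np.asarray(hist, float), np.asarray(histon, float)
    out = np.zeros_like(hist)
    nz = histon > 0
    out[nz] = 1.0 - hist[nz] / histon[nz]
    return out

def segment_ranges(rsh, hn):
    """Each contiguous run of bins with roughness >= hn becomes one
    threshold interval, so the number of thresholds follows from hn
    automatically."""
    runs, start = [], None
    for i, r in enumerate(rsh):
        if r >= hn and start is None:
            start = i
        elif r < hn and start is not None:
            runs.append((start, i - 1))
            start = None
    if start is not None:
        runs.append((start, len(rsh) - 1))
    return runs

rsh = roughness_hist([4, 1, 0, 5, 2], [4, 4, 0, 10, 8])
ranges = segment_ranges(rsh, 0.5)
```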


2013 ◽  
Vol 427-429 ◽  
pp. 1836-1840 ◽  
Author(s):  
Yong Zhuo Wu ◽  
Zhen Tu ◽  
Lei Liu

Image repair using digital image processing technology has become a new research topic in computer applications. A novel method of local statistic enhancement based on a genetic algorithm is proposed in this paper for image enhancement. A modified amplification function is used as the judgement criterion, and the optimal parameters are searched for by the genetic algorithm. Experimental results show that image quality is improved dramatically by this method.
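
A minimal genetic-algorithm parameter search of the kind described can be sketched as follows. The objective here is a stand-in (the paper's criterion is its modified amplification function), and the operator choices are illustrative:

```python
import random

def fitness(params):
    """Stand-in objective: rewards closeness of (a, b) to a target
    pair so the GA is testable; the paper instead maximises a
    modified amplification criterion over enhancement parameters."""
    a, b = params
    return -((a - 0.8) ** 2 + (b - 0.2) ** 2)

def ga_search(fitness, generations=60, pop_size=20, seed=1):
    """Minimal real-coded genetic algorithm: truncation selection,
    blend crossover, and Gaussian mutation over two parameters in
    [0, 1]."""
    rng = random.Random(seed)
    pop = [(rng.random(), rng.random()) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # elitist truncation
        children = []
        while len(children) < pop_size - len(parents):
            p, q = rng.sample(parents, 2)
            w = rng.random()
            child = tuple(min(1.0, max(0.0, w * x + (1 - w) * y
                                       + rng.gauss(0, 0.05)))
                          for x, y in zip(p, q))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = ga_search(fitness)
```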


Symmetry ◽  
2019 ◽  
Vol 11 (1) ◽  
pp. 67 ◽  
Author(s):  
Shan Bian ◽  
Haoliang Li ◽  
Tianji Gu ◽  
Alex Chichung Kot

The analysis of video compression history is one of the important issues in video forensics. It can assist forensics analysts in many ways, e.g., to determine whether a video is original or potentially tampered with, or to evaluate the real quality of a re-encoded video. In the existing literature, however, there are very few works targeting videos in HEVC format (the most recent standard), especially for the detection of transcoded videos. In this paper, we propose a novel method based on the statistics of Prediction Units (PUs) to detect HEVC videos transcoded from AVC format. According to the analysis of the footprints of HEVC videos, the frequencies of PUs (whether in symmetric patterns or not) are distinguishable between original HEVC videos and transcoded ones. The reason is that the previous AVC encoding disturbs the PU partition scheme of HEVC. Based on this observation, a 5D and a 25D feature set are extracted from I frames and P frames, respectively, and are combined to form the proposed 30D feature set, which is finally fed to an SVM classifier. To validate the proposed method, extensive experiments are conducted on a dataset consisting of CIF (352 × 288) and HD 720p videos with a diversity of bitrates and different encoding parameters. Experimental results show that the proposed method is very effective at detecting transcoded HEVC videos and outperforms the most recent work.
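
Building a feature vector from PU partition frequencies can be sketched as below. The HEVC partition-mode names are standard, but the paper's actual 5D/25D layouts also factor in coding-unit size, which this simplified histogram omits; the example mode lists are assumptions:

```python
from collections import Counter

I_MODES = ["2Nx2N", "NxN"]                      # intra PU partitions
P_MODES = ["2Nx2N", "2NxN", "Nx2N", "NxN",      # inter PU partitions,
           "2NxnU", "2NxnD", "nLx2N", "nRx2N"]  # incl. asymmetric ones

def pu_histogram(pu_modes, vocabulary):
    """Normalised frequency of each PU partition mode."""
    counts = Counter(pu_modes)
    total = sum(counts[m] for m in vocabulary) or 1
    return [counts[m] / total for m in vocabulary]

def frame_features(i_frame_pus, p_frame_pus):
    """Concatenate I- and P-frame PU statistics into one vector for a
    classifier (an SVM in the paper)."""
    return pu_histogram(i_frame_pus, I_MODES) + pu_histogram(p_frame_pus, P_MODES)

feats = frame_features(["2Nx2N", "2Nx2N", "NxN", "NxN"],
                       ["2NxN", "2NxN", "2NxN", "2Nx2N"])
```

A prior AVC encoding would shift these frequencies away from those of a single-pass HEVC encode, which is exactly the signal the classifier exploits.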

