CGCI-SIFT: A More Efficient and Compact Representation of Local Descriptor

2013 ◽  
Vol 13 (3) ◽  
pp. 132-141 ◽  
Author(s):  
Dongliang Su ◽  
Jian Wu ◽  
Zhiming Cui ◽  
Victor S. Sheng ◽  
Shengrong Gong

This paper proposes a novel invariant local descriptor, a combination of gradient histograms with contrast intensity (CGCI), for image matching and object recognition. Considering the different contributions that sub-regions of a local interest region make to an interest point, we divide the local interest region around the interest point into two main sub-regions: an inner region and a peripheral region. We then describe the two regions with gradient histogram information for the inner region and contrast intensity information for the peripheral region, respectively. The contrast intensity information is defined as the intensity difference between the interest point and the other pixels in the local region. Our experimental results demonstrate that the proposed descriptor performs better than SIFT and its variants PCA-SIFT and SURF under various optical and geometric transformations. It also has better matching efficiency than SIFT, PCA-SIFT, and SURF, and has the potential to be used in a variety of real-time applications.
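
The inner/peripheral construction described above lends itself to a compact sketch. The following is a minimal illustration in Python, assuming a square grayscale patch centered on the interest point; the inner radius, ring pooling, and histogram bin count are illustrative choices, not the authors' parameters.

import numpy as np

def cgci_descriptor(patch, inner_radius=4, n_bins=8):
    # patch: square grayscale patch centered on the interest point
    patch = patch.astype(float)
    h, w = patch.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - cy, xx - cx)
    inner = dist <= inner_radius

    # Inner region: gradient-orientation histogram weighted by magnitude
    gy, gx = np.gradient(patch)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    hist, _ = np.histogram(ang[inner], bins=n_bins,
                           range=(0, 2 * np.pi), weights=mag[inner])

    # Peripheral region: contrast intensity, i.e. intensity differences
    # between the interest point and surrounding pixels, pooled into rings
    diff = patch - patch[cy, cx]
    edges = np.linspace(inner_radius, dist.max(), 5)
    rings = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        vals = diff[(dist > lo) & (dist <= hi)]
        rings.append(vals.mean() if vals.size else 0.0)

    desc = np.concatenate([hist, rings])
    return desc / (np.linalg.norm(desc) + 1e-9)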

1985 ◽  
Vol 1 (2) ◽  
pp. 105-110 ◽  
Author(s):  
A. J. Dutt

This paper deals with the investigation of wind loading on the pyramidal roof structure of the Church of St Michael in Newton, Wirral, Cheshire, England, by wind tunnel tests on a 1/48 scale model. The roof of the model was flat in the peripheral region of the building, while in the inner region there was a grouping of four pyramidal roofs. Wind tunnel experiments were carried out; the wind pressure distribution and contours of wind pressure on all surfaces of the pyramid roofs were determined for four principal wind directions. The average suctions on the roof were evaluated. The highest point suction encountered was −4q, whilst the maximum average suction on the roof was −0·86q. The results obtained from the wind tunnel tests were used for the design of pyramidal roof structures and roof coverings, for which localised high suctions were very significant.
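
The suctions are reported as multiples of the free-stream dynamic pressure q = ½ρV². A short worked example of converting these coefficients into design pressures, with an illustrative wind speed and air density that are not taken from the paper:

rho = 1.225           # air density, kg/m^3 (illustrative sea-level value)
v = 40.0              # design wind speed, m/s (illustrative)
q = 0.5 * rho * v**2  # dynamic pressure, Pa

cp_peak = -4.0        # highest point suction reported: -4q
cp_avg = -0.86        # maximum average roof suction: -0.86q

print(f"q = {q:.0f} Pa")
print(f"peak local suction      = {cp_peak * q:.0f} Pa")
print(f"maximum average suction = {cp_avg * q:.0f} Pa")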


1984 ◽  
Vol 62 (2) ◽  
pp. 396-401 ◽  
Author(s):  
James W. Perry ◽  
Ray F. Evert

Microsclerotia of Verticillium dahliae in potato (Solanum tuberosum) roots were examined, primarily with the transmission electron microscope. These resting structures were found in all tissues except the xylem, even though the tracheary elements of some microsclerotium-containing roots were infected. Microsclerotia were occasionally present within 9 days of inoculation, but they became more numerous with time. Usually, the microsclerotia consisted of a peripheral region composed of degenerate cells and cells with cytoplasmic contents of moderate electron density and an inner region of cells with very electron-dense cytoplasm. Melanin coated all cells but was more abundant over cytoplasm-containing cells. Roots containing large microsclerotia were moribund. Several living roots contained small microsclerotia, some of which produced penetration hyphae. Living host cells responded to attempted penetrations by producing lignitubers which ensheathed the penetration hyphae, as was previously described by the authors.


1979 ◽  
Vol 69 (1) ◽  
pp. 271-288
Author(s):  
Jon Berger ◽  
Duncan Carr Agnew ◽  
Robert L. Parker ◽  
William E. Farrell

We present a rapid and accurate method of calibrating seismic systems using a random binary calibration signal and cross-spectral techniques. The complex transfer function obtained from the cross spectrum is least-squares fit to the ratio of two polynomials in s (s = iω) whose degrees are determined by a linear systems analysis. This provides a compact representation of the system frequency response. We demonstrate its application to two seismic systems, the IDA and SRO seismometers. This method yields calibrations to an accuracy of better than 1 per cent in amplitude and 1° in phase.
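
A minimal sketch of this procedure in Python/SciPy, under stated assumptions: the stand-in second-order instrument, the noise level, and the polynomial degrees (2/2) are illustrative, not the IDA or SRO responses.

import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs, n = 40.0, 2**16                     # sample rate and length (illustrative)

# Random binary calibration signal
x = np.where(rng.random(n) > 0.5, 1.0, -1.0)

# Stand-in instrument: a second-order system (illustrative, not IDA/SRO)
bz, az = signal.bilinear([1.0, 0.0, 0.0], [1.0, 0.6, 4.0], fs=fs)
y = signal.lfilter(bz, az, x) + 0.01 * rng.standard_normal(n)

# Transfer function from the cross spectrum: H = Pxy / Pxx
f, pxy = signal.csd(x, y, fs=fs, nperseg=4096)
_, pxx = signal.welch(x, fs=fs, nperseg=4096)
H, s = (pxy / pxx)[1:], 1j * 2 * np.pi * f[1:]

# Least-squares fit of H to a ratio of polynomials in s. Linearize as
# B(s) - H*A(s) = 0 with the leading denominator coefficient fixed to 1.
M = np.column_stack([s**2, s, np.ones_like(s), -H * s, -H])
rhs = H * s**2
theta, *_ = np.linalg.lstsq(np.vstack([M.real, M.imag]),
                            np.concatenate([rhs.real, rhs.imag]), rcond=None)
b, a = theta[:3], np.concatenate([[1.0], theta[3:]])
H_fit = np.polyval(b, s) / np.polyval(a, s)   # compact frequency response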


2021 ◽  
pp. 2150063
Author(s):  
Nan Jiang ◽  
Zhuoxiao Ji ◽  
Hong Li ◽  
Jian Wang

With the development of quantum computing, its application to image processing offers many advantages over classical image processing. In this paper, we propose a scheme to extract interest points in quantum images. An interest point is a kind of feature point that helps to identify the target object in the image. Our scheme is based on the idea of the Luminance Contrast (LC) algorithm. The scheme computes the absolute values of the gray-level differences between a pixel and all the others, and then adds these differences together. The sum is defined as the pixel's saliency. After computing the saliency of every pixel, we label the pixels with the maximal saliency as the interest points. The algorithm performs well, and its time complexity is much better than that of the classical algorithm under the same conditions, which provides a new idea for the extraction of image interest points.
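
The classical LC saliency that the scheme builds on can be computed efficiently with a gray-level histogram. A minimal sketch (the quantum circuit itself is the paper's contribution and is not reproduced here):

import numpy as np

def lc_saliency(img):
    # img: 2-D uint8 grayscale image
    # Saliency of pixel p: sum over all pixels q of |I(p) - I(q)|.
    # Grouping pixels by gray level reduces the cost from the naive
    # O(N^2) to O(N + 256^2).
    hist = np.bincount(img.ravel(), minlength=256).astype(np.int64)
    levels = np.arange(256)
    # per-gray-level saliency: sum_g hist[g] * |level - g|
    table = np.abs(levels[:, None] - levels[None, :]) @ hist
    return table[img]

img = (np.random.default_rng(1).random((64, 64)) * 255).astype(np.uint8)
sal = lc_saliency(img)
interest = np.argwhere(sal == sal.max())   # pixels with maximal saliency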


2020 ◽  
Vol 27 (1) ◽  
Author(s):  
MB Ibrahim ◽  
KA Gbolagade

The science and art of data compression lies in presenting information in a compact form. This compact representation of information is generated by recognizing and exploiting structures that exist in the data. The Lempel-Ziv-Welch (LZW) algorithm is known to be one of the best compressors of text, achieving a high degree of compression for text files with many redundancies: the greater the redundancy, the greater the compression achieved. In this paper, the LZW algorithm is enhanced to achieve a higher degree of compression without compromising its performance through the introduction of the Chinese Remainder Theorem (CRT). Compression time and compression ratio were used as performance metrics. Simulations were carried out in MATLAB on five (5) text files of varying sizes to determine the efficiency of the proposed CRT-LZW technique. This new technique opens a path to compressing data faster than traditional LZW. The results show that CRT-LZW performs better than LZW in terms of computational time (0.12 s versus 15.15 s), while the compression ratio remains the same at 2.56%. The proposed technique's compression time also outperformed two investigated LZW-RNS implementations (0.12 s versus 2.86 s, and 0.12 s versus 0.14 s, respectively).
Keywords: Data Compression, Lempel-Ziv-Welch (LZW) algorithm, Enhancement, Chinese Remainder Theorem (CRT), Text files.
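
For reference, the classical LZW stage works as sketched below; the CRT post-processing of the resulting code stream is the paper's contribution and is not reproduced here.

def lzw_compress(text):
    # Classical LZW: grow a dictionary of substrings seen so far and
    # emit the code of the longest dictionary match at each step.
    dictionary = {chr(i): i for i in range(256)}
    w, codes = "", []
    for c in text:
        wc = w + c
        if wc in dictionary:
            w = wc
        else:
            codes.append(dictionary[w])
            dictionary[wc] = len(dictionary)
            w = c
    if w:
        codes.append(dictionary[w])
    return codes

codes = lzw_compress("TOBEORNOTTOBEORTOBEORNOT")
print(codes)   # repeated substrings collapse to single codes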


Geomatics ◽  
2021 ◽  
Vol 1 (2) ◽  
pp. 287-309
Author(s):  
Ankit Patel ◽  
Yi-Ting Cheng ◽  
Radhika Ravi ◽  
Yi-Chun Lin ◽  
Darcy Bullock ◽  
...  

Recently, light detection and ranging (LiDAR)-based mobile mapping systems (MMS) have been utilized for extracting lane markings using deep learning frameworks. However, huge datasets are required for training neural networks. Furthermore, with accurate lane markings being detected using LiDAR data, an algorithm for automatically reporting their intensity information is beneficial for identifying worn-out or missing lane markings. In this paper, a transfer learning approach based on fine-tuning a pretrained U-net model for lane marking extraction, along with a strategy for generating intensity profiles from the extracted results, is presented. Starting from a pretrained model, a new model can be trained better and faster to make predictions on a target domain dataset with only a few training examples. An original U-net model trained on two-lane highways (the source domain dataset) was fine-tuned to make accurate predictions on datasets with one-lane highway patterns (the target domain dataset). Specifically, encoder- and decoder-trained U-net models are presented: during retraining of the former, only the weights in the encoder path of the U-net were allowed to change, with the decoder weights frozen, and vice versa for the latter. On the test data (target domain), the encoder-trained model (F1-score: 86.9%) outperformed the decoder-trained model (F1-score: 82.1%). Additionally, on an independent dataset, the encoder-trained model (F1-score: 90.1%) performed better than the decoder-trained one (F1-score: 83.2%). Lastly, on the basis of the lane marking results obtained from the encoder-trained U-net, intensity profiles were generated. Such profiles can be used to identify lane marking gaps and investigate their cause through RGB imagery visualization.
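
A minimal sketch of the encoder-trained configuration in PyTorch, assuming a U-net whose layers are grouped into encoder and decoder attributes (the attribute names are hypothetical, not from the paper's code); the decoder-trained variant simply swaps the two loops.

import torch
from torch import nn

def make_encoder_trained(model: nn.Module) -> nn.Module:
    for p in model.decoder.parameters():
        p.requires_grad = False      # freeze the decoder path
    for p in model.encoder.parameters():
        p.requires_grad = True       # retrain the encoder path only
    return model

# The optimizer then receives only the trainable parameters:
# optimizer = torch.optim.Adam(
#     (p for p in model.parameters() if p.requires_grad), lr=1e-4)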


2015 ◽  
Vol 1083 ◽  
pp. 142-147
Author(s):  
Luan Zeng ◽  
You Zhai

In order to improve the robustness of the SURF descriptor applied to stereo image matching, a new matching method is proposed. Using the ratio of the minimum to the second-minimum Euclidean distance between corresponding features, we obtain a coarse set of matching points. Then, the epipolar line is computed from the calibration parameters. Correspondences are accepted as correct only if they fall into a small neighborhood of their epipolar line; taking errors into account, the neighborhood is set to (−3, 3). Using this restriction, we obtain the correct set of matching points. The experimental results show that both the correct matches and the matching efficiency are better than those of RANSAC.
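
A minimal sketch of the two-stage filter using OpenCV, assuming SURF keypoints and descriptors for both images and a fundamental matrix F derived from the calibration parameters; the ratio threshold is an illustrative choice.

import cv2
import numpy as np

def filter_matches(kp1, des1, kp2, des2, F, ratio=0.8, band=3.0):
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    coarse = []
    for m, n in matcher.knnMatch(des1, des2, k=2):
        if m.distance < ratio * n.distance:    # min / second-min distance test
            coarse.append(m)

    good = []
    for m in coarse:
        p1 = np.array([*kp1[m.queryIdx].pt, 1.0])
        p2 = np.array([*kp2[m.trainIdx].pt, 1.0])
        a, b, c = F @ p1                       # epipolar line of p1 in image 2
        dist = abs(a * p2[0] + b * p2[1] + c) / np.hypot(a, b)
        if dist < band:                        # keep points near their line
            good.append(m)
    return good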


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Wenpeng Gao ◽  
Xiaoguang Chen ◽  
Yili Fu ◽  
Minwei Zhu

The centerline, as a simple and compact representation of object shape, has been used to analyze variations in the shape of the human corpus callosum. However, automatic extraction of the callosal centerline remains a challenging problem. In this paper, we propose a method for automatic extraction of the callosal centerline from segmented mid-sagittal magnetic resonance (MR) images. A model-based point matching method is introduced to localize the anterior and posterior endpoints of the centerline. The endpoint model is constructed with a statistical descriptor of the shape context. Active contour modeling is adopted to drive a curve with fixed endpoints toward the centerline, using the gradient of the distance map of the segmented corpus callosum. Experiments with 80 segmented mid-sagittal MR images were performed. The proposed method is compared with a skeletonization method and an interactive method in terms of recovery error and reproducibility. Results indicate that the proposed method outperforms skeletonization and is comparable with, and sometimes better than, the interactive method.
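
The external force driving the curve can be made concrete. A bare-bones sketch, assuming a binary mask of the segmented corpus callosum; the update rule is a minimal illustration and omits the internal smoothness terms of a full active contour model.

import numpy as np
from scipy import ndimage

def centerline_step(points, mask, step=0.5):
    # points: (n, 2) array of (row, col) snake vertices
    dist = ndimage.distance_transform_edt(mask)   # distance to background
    gy, gx = np.gradient(dist)                    # pushes points toward the ridge
    new_pts = points.astype(float).copy()
    for i in range(1, len(points) - 1):           # endpoints stay fixed
        iy, ix = int(round(points[i, 0])), int(round(points[i, 1]))
        new_pts[i, 0] += step * gy[iy, ix]
        new_pts[i, 1] += step * gx[iy, ix]
        # (a full snake also adds internal elasticity/rigidity terms)
    return new_pts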


1976 ◽  
Vol 4 (2) ◽  
pp. 138-143 ◽  
Author(s):  
Albert Cohen ◽  
Carlos M Hernandez

Two dose-levels of nefopam hydrochloride (i.e. 30 mg and 60 mg) were compared with two dose-levels of aspirin (i.e. 300 mg and 600 mg) and placebo in 125 male patients having pain associated with muscle disorders. Drugs were given as a single dose, and pain intensity and side-effects were monitored at thirty minutes and then hourly for four hours. The time-course action of the drugs revealed that aspirin 300 mg failed to achieve statistically significant analgesia at any post-treatment observation, whereas nefopam 60 mg was significantly better than placebo (p < 0·05) at one and three hours in terms of pain intensity and at one hour in terms of pain intensity difference scores. Aspirin 600 mg was significantly different from placebo (p < 0·05) at all hourly observations for both efficacy parameters, as was nefopam 30 mg (p < 0·01). Summation of pain intensity difference scores showed aspirin 600 mg and nefopam 30 mg to be significantly different from placebo at the 0·025 and 0·005 levels respectively.


2015 ◽  
Vol 2015 ◽  
pp. 1-16
Author(s):  
Min Mao ◽  
Kuang-Rong Hao ◽  
Yong-Sheng Ding

Image feature points tend to gather in regions with significant intensity change, such as textured portions or edges of an image, which can be detected by state-of-the-art intensity-based point detectors; in low-textured areas, classical interest-point detectors find almost no points. In this paper we describe a novel algorithm based on affine transform and graph cut for interest-point detection and matching from wide-baseline image pairs with weakly textured objects. The detection and matching mechanism can be separated into three steps. Firstly, the information in large textureless areas is enhanced by adding textures through the proposed texture synthesis algorithm TSIQ. Secondly, the initial interest-point set is detected by classical interest-point detectors. Finally, graph cuts are used to find the globally optimal set of matching points on stereo pairs. The efficacy of the proposed algorithm is verified by three kinds of experiments: the influence of point detection from synthetic textures with different texture samples, the stability under different geometric transformations, and the performance in improving the quasi-dense matching algorithm.
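
A rough sketch of the first two steps under stated assumptions: the paper's TSIQ texture synthesis is not reproduced here, so blended random noise stands in for it, and the variance threshold, window size, and detector parameters are illustrative.

import cv2
import numpy as np

def detect_with_synthetic_texture(gray, var_thresh=25.0):
    # 1) find low-texture areas via local intensity variance
    mean = cv2.blur(gray.astype(np.float32), (15, 15))
    sq_mean = cv2.blur(gray.astype(np.float32) ** 2, (15, 15))
    low_texture = (sq_mean - mean**2) < var_thresh

    # 2) add texture there (stand-in for TSIQ), then detect interest points
    rng = np.random.default_rng(0)
    textured = gray.astype(np.float32).copy()
    textured[low_texture] += rng.normal(0, 10, low_texture.sum())
    textured = np.clip(textured, 0, 255).astype(np.uint8)
    return cv2.goodFeaturesToTrack(textured, maxCorners=500,
                                   qualityLevel=0.01, minDistance=5)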

