Vehicle Reidentification via Multifeature Hypergraph Fusion

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Wang Li ◽  
Zhang Yong ◽  
Yuan Wei ◽  
Shi Hongxing

Vehicle reidentification refers to the task of matching vehicles across nonoverlapping cameras, which is one of the critical problems in intelligent transportation systems. Because vehicles on the road often closely resemble one another, traditional methods do not perform well on vehicles with high similarity. In this paper, we use a hypergraph representation to integrate image features and tackle vehicle re-ID via hypergraph learning algorithms. A single feature descriptor can only capture one aspect of an image, so merging multiple descriptors requires an efficient and appropriate representation, and a hypergraph is naturally suited to modeling such high-order relationships. In addition, the spatiotemporal correlation of traffic status between cameras is a constraint beyond the image itself, which can greatly improve re-ID accuracy for different vehicles with similar appearances. The proposed method uses hypergraph optimization to learn the similarity between the query image and the images in the library. By exploiting pairwise and higher-order relationships between query objects and the image library, the similarity measure improves on direct matching. Experiments conducted on the image library constructed in this paper demonstrate the effectiveness of multifeature hypergraph fusion and the spatiotemporal correlation model for vehicle reidentification.
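
As a concrete illustration of hypergraph-based ranking (a minimal sketch, not the authors' exact formulation): each feature descriptor contributes one hyperedge per image from its k nearest neighbours, and gallery images are then ranked against the query with the standard hypergraph regularization framework. All data and parameters below are toy assumptions.

```python
# Minimal sketch of multifeature hypergraph ranking (illustrative, not the paper's code).
import numpy as np

def hypergraph_rank(feature_sets, query_idx, k=5, alpha=0.9):
    """feature_sets: list of (n, d_i) arrays, one per descriptor;
    query_idx: index of the query vertex; returns a ranking score per image."""
    n = feature_sets[0].shape[0]
    edges = []
    for X in feature_sets:                              # one hyperedge per vertex per descriptor
        d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
        for v in range(n):
            edges.append(np.argsort(d[v])[:k + 1])      # vertex plus its k nearest neighbours
    m = len(edges)
    H = np.zeros((n, m))                                 # incidence matrix
    for e, members in enumerate(edges):
        H[members, e] = 1.0
    w = np.ones(m)                                       # uniform hyperedge weights
    Dv = H @ w                                           # vertex degrees
    De = H.sum(axis=0)                                   # hyperedge degrees
    Theta = (H * w / De) @ H.T / np.sqrt(np.outer(Dv, Dv))
    y = np.zeros(n); y[query_idx] = 1.0                  # query indicator vector
    f = np.linalg.solve(np.eye(n) - alpha * Theta, (1 - alpha) * y)
    return f                                             # higher score = more similar to the query

# Example: two toy descriptors for 6 images, image 0 is the query.
scores = hypergraph_rank([np.random.rand(6, 8), np.random.rand(6, 4)], query_idx=0)
print(np.argsort(-scores))
```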

2013 ◽  
Vol 694-697 ◽  
pp. 2336-2340
Author(s):  
Yun Feng Yang ◽  
Feng Xian Tang

To construct a standardized, structured MRI (magnetic resonance imaging) image library by extracting and collating unstructured information from the literature, an identification method based on the fusion of image and text information is proposed. The method uses PHOW (Pyramid Histogram Of Words) to represent image features, combines them with the word-frequency characteristics of the embedded figure note (text), and then applies posterior multiplication fusion to classify and identify MRI images in online biological literature. The experimental results show that this method achieves a higher correct recognition rate and better recognition performance than identification based on PHOW or text features alone. The study can serve as a reference for constructing other structured professional databases from online literature.
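
The posterior multiplication fusion step can be sketched as follows, assuming two independently trained classifiers (one on PHOW image features, one on caption word frequencies) that each output class posteriors; the class labels and numbers are illustrative only.

```python
# Illustrative posterior multiplication fusion of an image classifier and a text classifier.
import numpy as np

def fuse_posteriors(p_image, p_text):
    """Multiply per-class posteriors and renormalise; returns the fused class index and posteriors."""
    fused = np.asarray(p_image) * np.asarray(p_text)
    fused /= fused.sum()
    return fused.argmax(), fused

# Toy posteriors over three hypothetical figure classes.
label, fused = fuse_posteriors([0.2, 0.5, 0.3], [0.1, 0.3, 0.6])
print(label, fused)
```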


2021 ◽  
Vol 8 (7) ◽  
pp. 97-105
Author(s):  
Ali Ahmed ◽  
Sara Mohamed

Content-Based Image Retrieval (CBIR) systems retrieve images from an image repository or database that are visually similar to a query image. CBIR plays an important role in various fields such as medical diagnosis, crime prevention, web-based searching, and architecture. CBIR consists mainly of two stages: the extraction of features and the matching of similarities. There are several ways to improve the efficiency and performance of CBIR, such as segmentation, relevance feedback, query expansion, and fusion-based methods. The literature has suggested several methods for combining and fusing various image descriptors. In general, fusion strategies are divided into two groups, namely early and late fusion. Early fusion combines image features from more than one descriptor into a single vector before the similarity computation, while late fusion refers either to combining the outputs produced by various retrieval systems or to combining different similarity rankings. In this study, a group of color and texture features is proposed for both fusion strategies. First, eighteen color features and twelve texture features are combined into a single vector representation for early fusion; second, three of the most common distance measures are combined in the late fusion stage. Our experimental results on two common image datasets show that the proposed method yields good retrieval performance compared to the traditional use of a single feature descriptor, and acceptable retrieval performance compared to some state-of-the-art methods. The overall accuracy of the proposed method is 60.6% and 39.07% for the Corel-1K and GHIM-10K datasets, respectively.
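
A minimal sketch of the two fusion strategies, under the assumption of generic 18-D colour and 12-D texture vectors and three common distance measures (Euclidean, Manhattan, cosine); these stand-ins are not necessarily the exact features or measures used in the study.

```python
# Illustrative early fusion (vector concatenation) and late fusion (rank averaging).
import numpy as np

def early_fusion(color_feats, texture_feats):
    """Concatenate descriptors into a single vector before similarity search."""
    return np.concatenate([color_feats, texture_feats], axis=-1)

def late_fusion_rank(query, gallery):
    """Average the ranks produced by several distance measures."""
    dists = [
        np.linalg.norm(gallery - query, axis=1),                # Euclidean
        np.abs(gallery - query).sum(axis=1),                    # Manhattan
        1 - gallery @ query / (np.linalg.norm(gallery, axis=1) * np.linalg.norm(query) + 1e-9),  # cosine
    ]
    ranks = np.mean([np.argsort(np.argsort(d)) for d in dists], axis=0)
    return np.argsort(ranks)                                     # fused ranking of gallery images

# Toy run: 30-D fused vectors (18 colour + 12 texture) for 5 gallery images.
q = early_fusion(np.random.rand(18), np.random.rand(12))
g = np.random.rand(5, 30)
print(late_fusion_rank(q, g))
```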


2021 ◽  
pp. 6787-6794
Author(s):  
Anisha Rebinth ◽  
S. Mohan Kumar

An automated Computer Aided Diagnosis (CAD) system for glaucoma diagnosis using fundus images is developed. Various glaucoma image classification schemes using supervised and unsupervised learning approaches are reviewed. The proposed approach involves three stages of glaucoma diagnosis. First, in the pre-processing stage, texture features of the fundus image are extracted with a two-dimensional Gabor filter at various scales and orientations. The image features are then summarized using higher-order statistical characteristics, and Principal Component Analysis (PCA) is used to select and reduce their dimensionality. For the performance study, the Gabor-based features are extracted from the RIM-ONE and HRF database images, and a Support Vector Machine (SVM) classifier is used for classification. The final stage uses the SVM classifier with a Radial Basis Function (RBF) kernel for efficient classification of glaucoma, achieving an accuracy of 90%.
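
A hedged sketch of such a pipeline: a Gabor filter bank at several scales and orientations, higher-order statistics per response, PCA for dimensionality reduction, and an RBF-kernel SVM. The filter parameters and the commented training call are illustrative assumptions, not the paper's settings.

```python
# Illustrative Gabor-statistics + PCA + SVM-RBF pipeline (parameters are assumptions).
import numpy as np
from scipy.stats import skew, kurtosis
from skimage.filters import gabor
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def gabor_stat_features(image, frequencies=(0.1, 0.2, 0.3), n_orient=4):
    """Mean, std, skewness and kurtosis of each Gabor response magnitude."""
    feats = []
    for f in frequencies:
        for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
            real, imag = gabor(image, frequency=f, theta=theta)
            mag = np.hypot(real, imag).ravel()
            feats += [mag.mean(), mag.std(), skew(mag), kurtosis(mag)]
    return np.array(feats)

# X: stacked feature vectors from fundus images, y: glaucoma / normal labels (not shown here).
# clf = make_pipeline(PCA(n_components=20), SVC(kernel="rbf", C=1.0, gamma="scale"))
# clf.fit(X_train, y_train); print(clf.score(X_test, y_test))
```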


Author(s):  
Siddhivinayak Kulkarni

Developments in technology and the Internet have led to an increase in the number of digital images and videos; thousands of images are added to the WWW every day. A Content-Based Image Retrieval (CBIR) system typically starts with a query example image, given by the user as input, from which low-level image features are extracted. These low-level features are used to find the database images most similar to the query image, ranked according to their similarity. This chapter evaluates various CBIR techniques based on fuzzy logic and neural networks and proposes a novel fuzzy approach to classify colour images based on their content, to pose queries in natural language, and to fuse the queries using neural networks for fast and efficient retrieval. A number of experiments were conducted on image sets for classification and retrieval, and promising results were obtained.
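
One hypothetical way to realize fuzzy colour classification of this kind: triangular membership functions over hue let an image belong partially to several colour classes, which is what makes natural-language queries such as "mostly red" possible. The class definitions and thresholds below are illustrative, not the chapter's.

```python
# Illustrative fuzzy colour membership over hue values (degrees).
import numpy as np

def tri_membership(x, a, b, c):
    """Triangular fuzzy membership on [a, c] peaking at b."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0, 1)

COLOUR_CLASSES = {"red": (330, 360, 390), "yellow": (30, 60, 90), "blue": (210, 240, 270)}

def fuzzy_colour_profile(hues_deg):
    """Average membership of an image's hue samples in each colour class (hue wrap handled)."""
    h = np.asarray(hues_deg, dtype=float) % 360
    return {name: float(np.mean(np.maximum(tri_membership(h, a, b, c),
                                           tri_membership(h + 360, a, b, c))))
            for name, (a, b, c) in COLOUR_CLASSES.items()}

print(fuzzy_colour_profile([350, 355, 10, 240]))  # mostly red, some blue
```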


2020 ◽  
Vol 32 (6) ◽  
pp. 821-835
Author(s):  
Jing Luo

With the popularization of intelligent transportation systems and the Internet of Vehicles, traffic flow data on urban road networks can be obtained easily and in large quantities, which provides data support for short-term traffic flow prediction based on real-time data. Of all the challenges faced in short-term traffic flow prediction research, this paper addresses two: the difficulty caused by the spatiotemporal correlation of traffic flow changes between upstream and downstream intersections, and the influence on prediction of traffic flow deviations caused by abnormal conditions. This paper proposes a Bayesian network short-term traffic flow prediction method based on quantile regression. The method effectively and efficiently handles the difficulty caused by spatiotemporal correlation, and at the same time achieves higher accuracy when predicting traffic flow changes under abnormal conditions.
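
The quantile-regression component can be sketched as below, predicting a downstream intersection's short-term flow from lagged upstream flows at several quantiles so that deviations under abnormal conditions are bounded rather than averaged away; the Bayesian-network structure itself is not reproduced, and a standard gradient-boosting quantile loss stands in for the paper's estimator.

```python
# Illustrative quantile regression on synthetic upstream/downstream flow data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
upstream = rng.poisson(100, size=(500, 3)).astype(float)        # lagged upstream counts (toy data)
downstream = 0.6 * upstream[:, 0] + 0.3 * upstream[:, 1] + rng.normal(0, 5, 500)

models = {q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(upstream, downstream)
          for q in (0.1, 0.5, 0.9)}                              # lower, median, upper quantiles
x_new = np.array([[110.0, 95.0, 102.0]])
print({q: float(m.predict(x_new)[0]) for q, m in models.items()})
```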


Electronics ◽  
2019 ◽  
Vol 8 (8) ◽  
pp. 847 ◽  
Author(s):  
Dong Zhang ◽  
Lindsey Ann Raven ◽  
Dah-Jye Lee ◽  
Meng Yu ◽  
Alok Desai

Finding corresponding image features between two images is often the first step in many computer vision algorithms. This paper introduces an improved synthetic basis feature descriptor algorithm that describes and compares image features in an efficient, discrete manner with rotation and scale invariance. It works by performing a number of similarity tests between the region surrounding a feature point and a predetermined number of synthetic basis images, generating a feature descriptor that uniquely describes the feature region. Features in two images are matched by comparing their descriptors. Because only the similarity of the feature region to each synthetic basis image is stored, the overall storage size is greatly reduced. In short, this new binary feature descriptor is designed to provide high feature matching accuracy with computational simplicity, relatively low resource usage, and a hardware-friendly design for real-time vision applications. Experimental results show that our algorithm produces higher precision rates and a larger number of correct matches than the original version and other mainstream algorithms, making it a good alternative for common computer vision applications. Two applications that often have to cope with scaling and rotation variations are included in this work to demonstrate its performance.
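
A simplified sketch (not the exact SYBA construction): the patch around a feature point is binarised, its agreement with a fixed set of random binary "synthetic basis" images is counted, and only those similarity counts are kept as the descriptor, so matching reduces to comparing compact codes.

```python
# Simplified synthetic-basis-style descriptor and matching (illustrative only).
import numpy as np

rng = np.random.default_rng(42)
PATCH, N_BASIS = 16, 32
BASIS = rng.integers(0, 2, size=(N_BASIS, PATCH, PATCH))         # fixed synthetic basis images

def syba_like_descriptor(patch):
    """Per-basis count of pixels where the binarised patch agrees with the basis image."""
    binary = (patch > np.median(patch)).astype(int)
    return np.array([(binary == b).sum() for b in BASIS], dtype=np.int32)

def match(desc_a, desc_b):
    """Smaller L1 distance between similarity counts = better match."""
    return int(np.abs(desc_a - desc_b).sum())

p1 = rng.random((PATCH, PATCH)); p2 = p1 + rng.normal(0, 0.05, (PATCH, PATCH))
print(match(syba_like_descriptor(p1), syba_like_descriptor(p2)))  # small distance for similar patches
```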


Electronics ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 391
Author(s):  
Dah-Jye Lee ◽  
Samuel G. Fuller ◽  
Alexander S. McCown

Feature detection, description, and matching are crucial steps for many computer vision algorithms. These steps rely on feature descriptors to match image features across sets of images. Previous work has shown that our SYnthetic BAsis (SYBA) feature descriptor can offer superior performance to other binary descriptors. This paper focuses on various optimizations and the hardware implementation of the newer, optimized version. The hardware implementation on a field-programmable gate array (FPGA) is a high-throughput, low-latency solution, which is critical for applications such as high-speed object detection and tracking, stereo vision, visual odometry, structure from motion, and optical flow. We compared our solution to other hardware designs of binary descriptors and demonstrated that our hardware implementation of SYBA as a feature descriptor offers superior image feature matching performance and uses fewer resources than most binary feature descriptor implementations.
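
For orientation, the matching inner loop that such a hardware design accelerates looks roughly like the following: every descriptor in image A is compared with every descriptor in image B and the best-scoring pair is kept. On the FPGA these comparisons run in parallel pipelines; here they are merely vectorised in NumPy for clarity, with toy integer descriptors.

```python
# Illustrative brute-force descriptor matching (the operation hardware designs parallelise).
import numpy as np

def brute_force_match(desc_a, desc_b):
    """Return, for each descriptor in A, the index and distance of its nearest descriptor in B."""
    dists = np.abs(desc_a[:, None, :] - desc_b[None, :, :]).sum(axis=2)   # L1 over all pairs
    return dists.argmin(axis=1), dists.min(axis=1)

idx, d = brute_force_match(np.random.randint(0, 50, (200, 32)),
                           np.random.randint(0, 50, (180, 32)))
print(idx[:5], d[:5])
```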


Author(s):  
Harikrishna G. N. Rai ◽  
K Sai Deepak ◽  
P. Radha Krishna

The multi-modal and unstructured nature of documents makes their retrieval from healthcare document repositories a challenging task. Text-based retrieval is the conventional approach to this problem. In this paper, the authors explore an alternative avenue of using embedded figures for the retrieval task. Usually, the context of a document is directly reflected in its figures; therefore, text embedded within these figures, along with image features, has been used for similarity-based figure retrieval. The present work demonstrates that image features describing the structural properties of figures are sufficient for the figure retrieval task. First, the authors analyze the problem of figure retrieval from biomedical literature and identify significant classes of figures. Second, they use edge information to discriminate between the structural properties of each figure category. Finally, the authors present a methodology using a novel feature descriptor, the Fourier Edge Orientation Autocorrelogram (FEOAC), to describe the structural properties of figures and build an effective biomedical document retrieval system. The experimental results demonstrate the improved retrieval performance of FEOAC for the figure retrieval task, especially when most of the edge information is retained. Apart from invariance to scale, rotation, and non-uniform illumination, the proposed feature descriptor is shown to be relatively robust to noisy edges.
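
A simplified illustration of an edge-orientation autocorrelogram (the Fourier stage of the proposed FEOAC descriptor is omitted): edge orientations are quantised, and for each distance d the descriptor records how often an edge pixel has a same-orientation edge pixel d pixels away along the image axes. All parameters are illustrative.

```python
# Illustrative edge-orientation autocorrelogram feature (Fourier stage omitted).
import numpy as np
from scipy import ndimage

def edge_orientation_autocorrelogram(img, n_bins=8, distances=(1, 3, 5), thresh=0.1):
    gx = ndimage.sobel(img, axis=1); gy = ndimage.sobel(img, axis=0)
    mag = np.hypot(gx, gy)
    edges = mag > thresh * mag.max()                                  # strong-edge mask
    bins = np.floor((np.arctan2(gy, gx) % np.pi) / np.pi * n_bins).astype(int) % n_bins
    feat = np.zeros((len(distances), n_bins))
    for i, d in enumerate(distances):
        for shift in ((0, d), (d, 0)):                                # horizontal and vertical offsets
            shifted_edges = np.roll(edges, shift, axis=(0, 1))
            shifted_bins = np.roll(bins, shift, axis=(0, 1))
            same = edges & shifted_edges & (bins == shifted_bins)     # same-orientation pairs at distance d
            for b in range(n_bins):
                feat[i, b] += np.sum(same & (bins == b))
    counts = np.maximum(np.bincount(bins[edges], minlength=n_bins), 1)
    return (feat / counts).ravel()                                    # normalised per-orientation counts

print(edge_orientation_autocorrelogram(np.random.rand(64, 64)).shape)  # (24,)
```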


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6235
Author(s):  
Chengyi Xu ◽  
Ying Liu ◽  
Fenglong Ding ◽  
Zilong Zhuang

To address the difficult problem of robot recognition and grasping in scenes of disorderly stacked wooden planks, a recognition and positioning method based on local image features and point pair geometric features is proposed here, and a local patch point pair feature is defined. First, we used self-developed scanning equipment to collect images of wood boards and a robot-driven RGB-D camera to collect images of disorderly stacked wooden planks. Image patches cut from these images were used to train a convolutional autoencoder, yielding a local texture feature descriptor that is robust to changes in perspective. Then, small image patches around the point pairs of the plank model are extracted and input into the trained encoder to obtain patch feature vectors, which are combined with the point pair geometric feature information to form a feature description code expressing the characteristics of the plank. After that, the robot drives the RGB-D camera to collect local image patches of the point pairs in the area to be grasped in the scene of stacked wooden planks, obtaining the feature description code of the planks to be grasped in the same way. Finally, through point pair feature matching, pose voting, and clustering, the pose of the plank to be grasped is determined. The robot grasping experiments show that both the recognition rate and the grasping success rate are high, reaching 95.3% and 93.8%, respectively. Compared with the traditional point pair feature (PPF) method and other methods, the method presented here has clear advantages and can be applied to stacked wood plank grasping environments.
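
The geometric point pair feature (PPF) that the method augments with learned patch codes can be sketched as follows: for two oriented points the feature is the pair distance plus three angles, which is what gets hashed and voted on. The autoencoder patch descriptor is represented here only as an abstract vector appended to the geometric part.

```python
# Illustrative point pair feature with an optional learned patch code appended.
import numpy as np

def angle(a, b):
    """Angle between two vectors in radians."""
    return np.arccos(np.clip(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12), -1, 1))

def point_pair_feature(p1, n1, p2, n2, patch_code1=None, patch_code2=None):
    d = p2 - p1
    geometric = np.array([np.linalg.norm(d), angle(n1, d), angle(n2, d), angle(n1, n2)])
    if patch_code1 is None:
        return geometric
    return np.concatenate([geometric, patch_code1, patch_code2])   # "local patch point pair feature"

f = point_pair_feature(np.array([0., 0, 0]), np.array([0., 0, 1]),
                       np.array([0.1, 0, 0]), np.array([0., 1, 0]))
print(f)   # [0.1, pi/2, pi/2, pi/2] up to numerical precision
```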


2012 ◽  
Vol 24 (01) ◽  
pp. 27-36 ◽  
Author(s):  
Mana Tarjoman ◽  
Emad Fatemizadeh ◽  
Kambiz Badie

Content-based image retrieval (CBIR) has become an important and active research field with the advance of multimedia and imaging technology. It makes use of image features, such as color, texture, and shape, to index images with minimal human intervention. A CBIR system can be used to locate medical images in large databases. In this paper, we propose a CBIR system for retrieving digital human brain magnetic resonance images (MRI) based on textural features and Adaptive Neuro-Fuzzy Inference System (ANFIS) learning, retrieving similar images from the database in two categories: normal and tumoral. A fuzzy classifier is used because of the uncertainty in the classifier results and its learning capacity, making ANFIS a good candidate for our categorization problem. In the online retrieval stage, the proposed CBIR system places a query image in either the normal or the tumoral category. Finally, relevance feedback is used to improve the effectiveness of the retrieval system. This research applies the CBIR approach to medical decision support and to the discrimination between normal and abnormal medical images based on features. We present and compare the results of the proposed method with CBIR systems used in recent works. The experimental results indicate that the proposed method is reliable and achieves high image retrieval efficiency compared with previous works.
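
As an assumed example of the texture side of such a system: first-order statistical texture descriptors of an MR slice (mean, variance, energy, entropy) that could feed a fuzzy or neuro-fuzzy classifier. ANFIS itself is not part of common Python libraries, so only the feature extraction is sketched here.

```python
# Illustrative first-order texture features for a grey-level MR slice.
import numpy as np

def texture_features(slice_img, n_bins=64):
    """Mean, variance, energy and entropy computed from the grey-level histogram."""
    hist, _ = np.histogram(slice_img, bins=n_bins)
    p = hist / (hist.sum() + 1e-12)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    energy = np.sum(p ** 2)
    return np.array([slice_img.mean(), slice_img.var(), energy, entropy])

print(texture_features(np.random.rand(128, 128)))
```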

