Evaluation of Convex Optimization Techniques for the Weighted Graph-Matching Problem in Computer Vision

Author(s):  
Christian Schellewald ◽  
Stefan Roth ◽  
Christoph Schnörr
Author(s):  
Shiyu Chen ◽  
Xiuxiao Yuan ◽  
Wei Yuan ◽  
Yang Cai

Image matching lies at the heart of photogrammetry and computer vision. For poorly textured images, matching results suffer from low contrast, repetitive patterns, discontinuities or occlusions, and few or homogeneous textures. Recently, graph matching has become popular because it integrates geometric and radiometric information. Focusing on the problem of matching poorly textured images, we propose an edge-weight strategy that improves the graph matching algorithm. A series of experiments was conducted on four typical landscapes: forest, desert, farmland, and urban areas. The experiments show that the new algorithm achieves better performance: compared with SIFT, it acquires twice as many corresponding points, and the overall recall rate reaches up to 68%, which verifies the feasibility and effectiveness of the algorithm.
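To make the graph-matching formulation concrete, the sketch below poses keypoint matching between two images as an edge-weighted graph matching (quadratic assignment) problem and solves it approximately with SciPy's FAQ solver. The Gaussian edge weights over pairwise keypoint distances are an illustrative assumption, not the edge-weight strategy proposed in the paper.

```python
# Hypothetical sketch (not the authors' code): edge-weighted graph matching
# between two sets of keypoints, posed as a quadratic assignment problem.
# The edge-weight choice (Gaussian affinity of pairwise distances) is an
# illustrative assumption; the paper's own edge-weight strategy may differ.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import quadratic_assignment

def edge_weight_matrix(points, sigma=50.0):
    """Fully connected graph over keypoints; edges weighted by a Gaussian
    affinity of the Euclidean distance between the two endpoints."""
    d = cdist(points, points)
    w = np.exp(-(d ** 2) / (2.0 * sigma ** 2))
    np.fill_diagonal(w, 0.0)          # no self-loops
    return w

def match_keypoint_graphs(pts_a, pts_b, sigma=50.0):
    """Return a permutation mapping keypoints of image A to image B that
    approximately maximizes the agreement of edge weights (FAQ solver)."""
    A = edge_weight_matrix(pts_a, sigma)
    B = edge_weight_matrix(pts_b, sigma)
    res = quadratic_assignment(A, B, method="faq", options={"maximize": True})
    return res.col_ind                # col_ind[i] = node of B matched to node i of A

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts_a = rng.uniform(0, 500, size=(20, 2))                # keypoints in image A
    perm = rng.permutation(20)
    pts_b = pts_a[perm] + rng.normal(0, 2.0, size=(20, 2))   # noisy, shuffled copy
    recovered = match_keypoint_graphs(pts_a, pts_b)
    print("fraction correctly matched:", np.mean(recovered == np.argsort(perm)))
```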


Author(s):  
Siva Reddy ◽  
Mirella Lapata ◽  
Mark Steedman

In this paper we introduce a novel semantic parsing approach to query Freebase in natural language without requiring manual annotations or question-answer pairs. Our key insight is to represent natural language via semantic graphs whose topology shares many commonalities with Freebase. Given this representation, we conceptualize semantic parsing as a graph matching problem. Our model converts sentences to semantic graphs using CCG and subsequently grounds them to Freebase guided by denotations as a form of weak supervision. Evaluation experiments on a subset of the Free917 and WebQuestions benchmark datasets show our semantic parser improves over the state of the art.
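As a toy illustration of "semantic parsing as graph matching", the sketch below grounds a tiny hand-built semantic graph against a miniature knowledge graph using subgraph isomorphism in networkx. The entities, relations, and graphs are invented stand-ins; the actual system derives semantic graphs from CCG and grounds them against Freebase under weak supervision from denotations.

```python
# Toy illustration (my own construction, not the paper's system): grounding
# treated as subgraph matching between a semantic graph and a knowledge graph.
import networkx as nx
from networkx.algorithms import isomorphism

# Miniature "knowledge graph" (stand-in for Freebase); all names are made up here.
kb = nx.DiGraph()
kb.add_edge("BarackObama", "UnitedStates", rel="nationality")
kb.add_edge("BarackObama", "Honolulu", rel="place_of_birth")
kb.add_edge("AngelaMerkel", "Germany", rel="nationality")

# Ungrounded semantic graph for "Where was Obama born?":
# an answer variable ?x linked to the entity node by a birth-place relation.
query = nx.DiGraph()
query.add_edge("BarackObama", "?x", rel="place_of_birth")

# Find embeddings of the query graph in the KB whose edge labels agree.
matcher = isomorphism.DiGraphMatcher(
    kb, query,
    edge_match=isomorphism.categorical_edge_match("rel", None),
)
for mapping in matcher.subgraph_isomorphisms_iter():
    inverse = {v: k for k, v in mapping.items()}     # query node -> KB node
    if inverse.get("BarackObama") == "BarackObama":  # named entities must match
        print("?x =", inverse["?x"])                 # -> Honolulu
```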


2018 ◽  
Vol 2018 ◽  
pp. 1-15 ◽  
Author(s):  
Nitish Das ◽  
P. Aruna Priya

The mathematical model for designing a complex digital system is a finite state machine (FSM). Applications such as digital signal processing (DSP) and built-in self-test (BIST) require specific operations to be performed only in particular instances. Hence, the optimal synthesis of such systems requires a reconfigurable FSM. The objective of this paper is to create a framework for a reconfigurable FSM with input multiplexing and state-based input selection (Reconfigurable FSMIM-S) architecture. The Reconfigurable FSMIM-S architecture is constructed by combining the conventional FSMIM-S architecture with an optimized multiplexer bank (which defines the mode of operation). To this end, the descriptions of a set of FSMs for a particular application are taken as input. The problem of obtaining the required optimized multiplexer bank is transformed into a weighted bipartite graph matching problem whose objective is to iteratively match the FSM descriptions in the set at minimal cost. As a solution, an iterative greedy heuristic based on the Hungarian algorithm is proposed. Experimental results on the MCNC FSM benchmarks demonstrate a significant speed improvement of 30.43% over the variation-based reconfigurable multiplexer bank (VRMUX) and of 9.14% over the combination-based reconfigurable multiplexer bank (CRMUX) in field-programmable gate array (FPGA) implementations.
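The sketch below illustrates the kind of weighted bipartite matching subroutine the paper relies on: pairing the inputs of two FSM descriptions at minimum cost with the Hungarian algorithm (SciPy's linear_sum_assignment). The cost model used here, the number of states in which two inputs' selection patterns disagree, is a hypothetical placeholder rather than the paper's actual cost function.

```python
# Illustrative sketch, not the paper's implementation: minimum-cost bipartite
# matching of the inputs of two FSM descriptions via the Hungarian algorithm.
# The cost model below is an invented placeholder.
import numpy as np
from scipy.optimize import linear_sum_assignment

def input_matching_cost(sel_a, sel_b):
    """sel_a[i, s] = 1 if input i of FSM A is selected in state s (same for B).
    Cost of pairing input i of A with input j of B = number of states in which
    their selection patterns disagree (hypothetical cost model)."""
    n_a, n_b = sel_a.shape[0], sel_b.shape[0]
    cost = np.zeros((n_a, n_b), dtype=int)
    for i in range(n_a):
        for j in range(n_b):
            cost[i, j] = np.sum(sel_a[i] != sel_b[j])
    return cost

def match_fsm_inputs(sel_a, sel_b):
    """Return the minimum-cost pairing of FSM A inputs to FSM B inputs."""
    cost = input_matching_cost(sel_a, sel_b)
    rows, cols = linear_sum_assignment(cost)       # Hungarian algorithm
    return list(zip(rows, cols)), cost[rows, cols].sum()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sel_a = rng.integers(0, 2, size=(4, 8))        # 4 inputs, 8 states (FSM A)
    sel_b = rng.integers(0, 2, size=(4, 8))        # 4 inputs, 8 states (FSM B)
    pairs, total = match_fsm_inputs(sel_a, sel_b)
    print("input pairing:", pairs, "total mismatch cost:", total)
```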


2020 ◽  
Vol 20 (18) ◽  
pp. 1582-1592 ◽  
Author(s):  
Carlos Garcia-Hernandez ◽  
Alberto Fernández ◽  
Francesc Serratosa

Background: Graph edit distance is a methodology used to solve error-tolerant graph matching. This methodology estimates a distance between two graphs by determining the minimum number of modifications required to transform one graph into the other. These modifications, known as edit operations, have an associated edit cost that has to be determined depending on the problem. Objective: This study focuses on the use of optimization techniques to learn the edit costs used when comparing graphs by means of the graph edit distance. Methods: Graphs are reduced structural representations of molecules that use pharmacophore-type node descriptions to encode the relevant molecular properties; this reduction technique is known as extended reduced graphs. The screening and statistical tools available on the ligand-based virtual screening benchmarking platform and in RDKit were used. Results: In the experiments, the graph edit distance with learned costs performed as well as or better than the graph edit distance with predefined costs. This is exemplified with six publicly available datasets: DUD-E, MUV, GLL&GDD, CAPST, NRLiSt BDB, and ULS-UDS. Conclusion: This study shows that the graph edit distance, together with learned edit costs, is useful for identifying bioactivity similarities in a structurally diverse group of molecules. Furthermore, the target-specific edit costs might provide useful structure-activity information for future drug-design efforts.
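For readers unfamiliar with parameterized edit costs, the sketch below computes a graph edit distance between two toy pharmacophore-style reduced graphs in networkx, with the edit costs exposed as tunable parameters. The node features and cost values are placeholders; in the study they would be the learned, target-specific costs.

```python
# Minimal sketch (not the study's code): graph edit distance between two
# pharmacophore-style reduced graphs with parameterized edit costs.
# Node features and cost values below are illustrative placeholders.
import networkx as nx

# Edit costs as tunable parameters (these would be learned in the actual study).
COSTS = {"node_subst": 1.0, "node_del": 2.0, "node_ins": 2.0,
         "edge_subst": 0.5, "edge_del": 1.0, "edge_ins": 1.0}

def node_subst_cost(a, b):
    # zero cost if the pharmacophore feature type matches, else a substitution cost
    return 0.0 if a["feat"] == b["feat"] else COSTS["node_subst"]

def edge_subst_cost(a, b):
    return 0.0 if a["bond"] == b["bond"] else COSTS["edge_subst"]

def reduced_graph(features, edges):
    g = nx.Graph()
    for i, feat in enumerate(features):
        g.add_node(i, feat=feat)
    for u, v, bond in edges:
        g.add_edge(u, v, bond=bond)
    return g

# Two toy reduced graphs; feature labels (aromatic, donor, acceptor) are invented.
g1 = reduced_graph(["Ar", "Don", "Acc"], [(0, 1, "s"), (1, 2, "s")])
g2 = reduced_graph(["Ar", "Acc"], [(0, 1, "s")])

ged = nx.graph_edit_distance(
    g1, g2,
    node_subst_cost=node_subst_cost,
    node_del_cost=lambda a: COSTS["node_del"],
    node_ins_cost=lambda a: COSTS["node_ins"],
    edge_subst_cost=edge_subst_cost,
    edge_del_cost=lambda a: COSTS["edge_del"],
    edge_ins_cost=lambda a: COSTS["edge_ins"],
)
print("parameterized graph edit distance:", ged)   # smaller = more similar molecules
```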


Biomolecules ◽  
2021 ◽  
Vol 11 (12) ◽  
pp. 1773 ◽  
Author(s):  
Bahareh Behkamal ◽  
Mahmoud Naghibzadeh ◽  
Mohammad Reza Saberi ◽  
Zeinab Amiri Tehranizadeh ◽  
Andrea Pagnani ◽  
...  

Cryo-electron microscopy (cryo-EM) is a structural technique that has played a significant role in protein structure determination in recent years. Compared to the traditional methods of X-ray crystallography and NMR spectroscopy, cryo-EM is capable of producing images of much larger protein complexes. However, cryo-EM reconstructions are in some cases limited to medium resolution (~4–10 Å). In this resolution range, a cryo-EM density map can hardly be used to directly determine the protein structure at the atomic level, or even the backbone trace of its amino acid residues. At such resolutions, only the position and orientation of secondary structure elements (SSEs) such as α-helices and β-sheets are observable. Consequently, finding the mapping between the secondary structures of the modeled structure (SSEs-A) and those of the cryo-EM map (SSEs-C) is one of the primary concerns in cryo-EM modeling. To address this issue, this study proposes a novel automatic computational method to identify SSE correspondences in three-dimensional (3D) space. First, by modeling the target sequence and extracting highly reliable features from the generated 3D model and the density map, the SSE matching problem is formulated as a 3D vector matching problem. The 3D vector matching problem is then transformed into a 3D graph matching problem. Finally, a similarity-based voting algorithm combined with the principle of least conflict (PLC) is developed to obtain the SSE correspondences. To evaluate the accuracy of the method, a test set of 25 experimental and simulated maps with a maximum of 65 SSEs is selected. Comparative studies are also conducted to demonstrate the superiority of the proposed method over some state-of-the-art techniques. The results demonstrate that the method is efficient and robust, and that it works well in the presence of errors in the predicted secondary structures of the cryo-EM images.
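The following sketch conveys the 3D vector matching idea in a much simplified form: each SSE is reduced to a 3D axis vector, candidate model-to-map pairings accumulate votes from geometrically consistent pairs, and a greedy pass extracts a correspondence. It is a deliberate simplification for illustration; the published method's feature extraction, graph construction, and PLC-based conflict resolution are not reproduced here.

```python
# Conceptual sketch only (a simplification, not the published algorithm):
# each secondary-structure element is reduced to a 3D vector (axis start/end),
# candidate pairings between model SSEs and map SSEs are scored by pairwise
# geometric consistency (length, inter-SSE distance, inter-SSE angle), and a
# simple greedy vote replaces the paper's voting + least-conflict resolution.
import numpy as np

def sse_length(v):
    return np.linalg.norm(v[1] - v[0])

def pair_geometry(v1, v2):
    """Distance between midpoints and |cos angle| between axes of two SSE vectors."""
    m1, m2 = (v1[0] + v1[1]) / 2, (v2[0] + v2[1]) / 2
    d1, d2 = v1[1] - v1[0], v2[1] - v2[0]
    cos = abs(np.dot(d1, d2)) / (np.linalg.norm(d1) * np.linalg.norm(d2) + 1e-9)
    return np.linalg.norm(m1 - m2), cos

def vote_matrix(model_sses, map_sses, len_tol=2.0, dist_tol=3.0, ang_tol=0.2):
    """votes[i, a] counts how many other pairings (j, b) are geometrically
    consistent with assigning model SSE i to map SSE a."""
    nA, nC = len(model_sses), len(map_sses)
    votes = np.zeros((nA, nC))
    for i in range(nA):
        for a in range(nC):
            if abs(sse_length(model_sses[i]) - sse_length(map_sses[a])) > len_tol:
                continue
            for j in range(nA):
                for b in range(nC):
                    if i == j or a == b:
                        continue
                    dA, cA = pair_geometry(model_sses[i], model_sses[j])
                    dC, cC = pair_geometry(map_sses[a], map_sses[b])
                    if abs(dA - dC) < dist_tol and abs(cA - cC) < ang_tol:
                        votes[i, a] += 1
    return votes

def greedy_correspondence(votes):
    """Pick the highest-voted pairs one by one, never reusing an SSE."""
    votes = votes.copy()
    pairs = []
    while votes.max() > 0:
        i, a = np.unravel_index(np.argmax(votes), votes.shape)
        pairs.append((int(i), int(a)))
        votes[i, :] = -1
        votes[:, a] = -1
    return pairs
```

A correspondence would then be obtained by calling greedy_correspondence(vote_matrix(model_sses, map_sses)), where each SSE is supplied as a 2×3 array holding the start and end coordinates of its axis.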


2021 ◽  
Author(s):  
Shadi Sadeghpour Kharkan

In this thesis, we present a cache placement scheme that deals with the backhaul link constraint in small cell networks for 5G wireless systems. We formulated the cache placement problem as a graph matching problem and presented an optimal file-helper matching algorithm. We defined a stability criterion for the matching and found that our matching solution is stable in the sense that every helper finds at least one file to cache, provided that no file exceeds the minimum cache size. We achieved a unique placement of each file within a cluster of helpers, which increases the number of distinct files cached in a cluster. Furthermore, our experimental evaluation demonstrates that our algorithm increases the local and neighbor hit ratios compared with random placement, which in turn significantly decreases the traffic over the backhaul bottleneck link.
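As a rough illustration of the file-helper matching idea, the sketch below places files on the helpers of one cluster by solving a minimum-cost bipartite assignment, expanding each helper into one slot per file it can hold so that every file is placed on at most one helper in the cluster. The objective used (file popularity weighted by helper coverage) and all parameters are assumptions for illustration, not the formulation from the thesis.

```python
# Hedged sketch (an assumed formulation, not the thesis algorithm): file-helper
# cache placement in one cluster as a minimum-cost bipartite assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def place_files(popularity, helper_slots, coverage):
    """popularity[f]: request probability of file f
    helper_slots[h]: number of files helper h can cache
    coverage[h]:     fraction of cluster users that can reach helper h
    Returns {helper: [files cached on that helper]}."""
    slots = [h for h, s in enumerate(helper_slots) for _ in range(s)]
    n_files, n_slots = len(popularity), len(slots)
    # cost[f, k] = -(expected local hits if file f occupies slot k)  [assumed objective]
    cost = np.zeros((n_files, n_slots))
    for f in range(n_files):
        for k, h in enumerate(slots):
            cost[f, k] = -popularity[f] * coverage[h]
    rows, cols = linear_sum_assignment(cost)       # optimal file -> slot assignment
    placement = {h: [] for h in range(len(helper_slots))}
    for f, k in zip(rows, cols):
        placement[slots[k]].append(f)
    return placement

if __name__ == "__main__":
    popularity = [0.4, 0.25, 0.15, 0.1, 0.05, 0.05]    # Zipf-like file popularities
    helper_slots = [2, 2, 1]                           # cache sizes of 3 helpers
    coverage = [0.9, 0.6, 0.3]                         # user coverage per helper
    print(place_files(popularity, helper_slots, coverage))
```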

