A novel image inpainting framework based on multilevel image pyramids

2021 ◽  
Vol 41 (1) ◽  
Author(s):  
Md. Salman Bombaywala ◽  
Chirag Paunwala

Image inpainting is the art of manipulating an image so that the manipulation is visually imperceptible. A considerable amount of research has been done in this area over the last few years; however, state-of-the-art techniques still suffer from high computational complexity and limited plausibility of their results. This paper proposes a multilevel image-pyramid-based inpainting algorithm. Inpainting starts at the coarsest level of the pyramid, and the recovered information is propagated to the subsequent levels until the bottom level is inpainted. The search strategy hashes the coherent information in the image, which makes the search fast and accurate. In addition, the search space is constrained by the propagated information, which reduces the complexity of the algorithm. Compared with other inpainting methods, the proposed algorithm fills the target region more plausibly and in better agreement with human vision. Experimental results show that the proposed algorithm achieves better results than other inpainting techniques.
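A minimal sketch (not the authors' code) of the coarse-to-fine flow the abstract describes, assuming a Gaussian pyramid and using OpenCV's built-in inpainting as a stand-in for the paper's hash-based coherent-patch search; the function names are illustrative only.

```python
# Hedged sketch: build a Gaussian pyramid, inpaint the coarsest level
# first, and propagate the result down as the initial estimate for each
# finer level. In the paper the propagated estimate constrains the patch
# search; here it merely initializes the hole before re-inpainting.
import cv2
import numpy as np

def build_pyramids(image, mask, levels):
    """Gaussian pyramids of the image and the (nonzero = hole) mask."""
    images, masks = [image], [mask]
    for _ in range(levels - 1):
        images.append(cv2.pyrDown(images[-1]))
        masks.append(cv2.pyrDown(masks[-1]))
    return images, masks

def inpaint_level(image, mask):
    # Placeholder for the hash-based coherent-patch search.
    return cv2.inpaint(image, (mask > 0).astype(np.uint8), 9, cv2.INPAINT_TELEA)

def pyramid_inpaint(image, mask, levels=4):
    images, masks = build_pyramids(image, mask, levels)
    result = inpaint_level(images[-1], masks[-1])         # coarsest level
    for lvl in range(levels - 2, -1, -1):                 # move to finer levels
        up = cv2.pyrUp(result, dstsize=images[lvl].shape[1::-1])
        hole = masks[lvl] > 0
        init = images[lvl].copy()
        init[hole] = up[hole]                             # propagate coarse estimate
        result = inpaint_level(init, masks[lvl])
    return result
```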

2021 ◽  
Author(s):  
Feiyang Ren ◽  
Yi Han ◽  
Shaohan Wang ◽  
He Jiang

Abstract A novel marine transportation network, built from high-dimensional AIS data with a multi-level clustering algorithm, is proposed to discover important waypoints in trajectories based on selected navigation features. The network is constructed in two parts: major nodes are computed with the CLIQUE and BIRCH clustering methods, and the navigation network is then assembled using edge-construction theory. Unlike state-of-the-art navigation-clustering work that uses ship coordinates only, the proposed method incorporates additional high-dimensional features such as draft, weather, and fuel consumption. On historical AIS data, more than 220,133 records spanning 30 days were used to extract 440 major nodal points in less than 4 minutes on an ordinary PC (i5 processor). The proposed method can be applied to higher-dimensional data for better ship path planning or even national economic analysis. The current work shows good performance in distinguishing complex ship trajectories and great potential for future shipping-market analytical predictions.
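For illustration, a hedged sketch of the two-stage node extraction under stated assumptions: a coarse grid-density filter stands in for CLIQUE, scikit-learn's Birch performs the second-stage clustering, and the feature columns are invented stand-ins for the AIS fields the abstract mentions.

```python
# Hedged sketch (not the authors' pipeline): keep AIS records that fall
# in densely populated lat/lon grid cells, then cluster the survivors in
# the full feature space to obtain candidate major nodal points.
import numpy as np
from sklearn.cluster import Birch

def dense_points(features, bins=50, min_count=20):
    """Grid-density filter standing in for CLIQUE's dense-unit detection."""
    lat, lon = features[:, 0], features[:, 1]
    hist, lat_edges, lon_edges = np.histogram2d(lat, lon, bins=bins)
    lat_idx = np.clip(np.digitize(lat, lat_edges) - 1, 0, bins - 1)
    lon_idx = np.clip(np.digitize(lon, lon_edges) - 1, 0, bins - 1)
    return features[hist[lat_idx, lon_idx] >= min_count]

# Stand-in for real AIS records; columns are assumptions:
# [lat, lon, draft, wind_speed, fuel_rate]
rng = np.random.default_rng(0)
features = rng.normal(size=(10_000, 5))

dense = dense_points(features)
model = Birch(n_clusters=None, threshold=0.5).fit(dense)
waypoints = model.subcluster_centers_   # candidate major nodal points
```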


Author(s):  
Ehsan Ehsaeyan ◽  
Alireza Zolghadrasli

Multilevel thresholding is a basic method in image segmentation, but conventional multilevel thresholding algorithms become computationally expensive when the number of segments is high. This paper proposes a novel and powerful variant of the Crow Search Algorithm (CSA) devoted to segmentation applications. The main contribution of our work is to combine Darwinian evolutionary theory with the heuristic CSA. First, the population is divided into specified groups, and each group searches for a better location in the search space. A policy of encouragement and punishment is imposed on the search agents to avoid entrapment in local optima and premature solutions. Moreover, to increase the convergence rate of the proposed method, a gray-scale map is applied to agents that leave the search boundary. Ten test images are selected to measure the ability of our algorithm, which is compared against the well-known energy-curve method. Two popular criteria, Kapur's entropy and Otsu's method, are employed to evaluate the introduced algorithm, and eight other search algorithms are implemented for comparison. The obtained results show that our method, compared with the original CSA and other heuristic search methods, extracts multilevel thresholds more efficiently.
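As a worked example of the objective such a threshold search maximizes, here is a hedged sketch of Kapur's entropy for a candidate set of thresholds; the Darwinian CSA itself is not reproduced.

```python
# Kapur's entropy: the sum of the entropies of the histogram classes
# induced by the thresholds. A multilevel-thresholding search (such as
# the CSA variant above) seeks the thresholds that maximize this value.
import numpy as np

def kapur_entropy(hist, thresholds):
    p = hist / hist.sum()
    bounds = [0] + sorted(thresholds) + [len(p)]
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()                 # class probability mass
        if w > 0:
            q = p[lo:hi] / w               # within-class distribution
            q = q[q > 0]
            total += -(q * np.log(q)).sum()
    return total

# Example: score one candidate threshold set for an 8-bit histogram.
hist = np.random.randint(1, 100, size=256).astype(float)  # stand-in histogram
print(kapur_entropy(hist, [85, 170]))
```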


Author(s):  
Nacéra Bennacer ◽  
Guy Vidal-Naquet

This paper proposes an Ontology-driven and Community-based Web Services (OCWS) framework that aims to automate the discovery, composition, and execution of web services. The purpose is to validate and execute a user's request built from the composition of a set of OCWS descriptions and a set of user constraints. The framework clearly separates the external OCWS descriptions from the internal, concrete implementations of e-services. It identifies three levels, namely the knowledge level, the community level, and the e-services level, and uses different participant agents deployed in a distributed architecture. First, the reasoner agent uses a description logic extended with actions to reason about (i) the consistency of the pre-conditions and post-conditions of the OCWS descriptions and the user constraints with the ontology semantics, and (ii) the consistency of the workflow matching assertions and the execution dependency graph. The execution plan model is then generated automatically and run by the composer agents using the dynamic execution plan algorithm (DEPA), according to the workflow matching and the established execution order. The community composer agents invoke the appropriate e-services and ensure that the non-functional constraints are satisfied. DEPA works dynamically, without a priori information about e-service states, and has interesting properties such as accounting for the non-determinism of e-services and reducing the search space.
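DEPA itself is not published in this abstract, so the sketch below only illustrates the kind of execution-dependency-graph ordering a composer agent needs, using Python's standard topological sorter; the service names are hypothetical.

```python
# Illustrative sketch only (not DEPA): order e-service invocations from
# an execution dependency graph, releasing services as soon as every
# service whose output they consume has finished.
from graphlib import TopologicalSorter

# Each e-service maps to the e-services whose outputs it consumes.
dependencies = {
    "book_hotel": {"check_availability"},
    "pay": {"book_hotel", "book_flight"},
    "book_flight": set(),
    "check_availability": set(),
}

ts = TopologicalSorter(dependencies)
ts.prepare()
while ts.is_active():
    ready = ts.get_ready()   # e-services executable in parallel right now
    print("invoke:", ready)
    ts.done(*ready)          # mark finished, unlocking their successors
```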


Author(s):  
Fergal McGrath ◽  
Rebecca Purcell

This chapter introduces external knowledge search strategy as a central element of an organization's overall knowledge management strategy. The argument cites how knowledge management has developed around a myopic internal focus and has thus far failed to take full account of the many sources of knowledge external to the organization. The chapter offers external knowledge search strategy as a means of integrating this external focus into knowledge management understanding, by providing a conceptual framework for organizations involved in the external knowledge management activity of external knowledge search. The framework identifies ten search paths organizations may follow into the search space, four of which relate exclusively to external knowledge search. The authors hope that establishing an external element within knowledge management strategy will inform knowledge management's recognition of the value of the extended enterprise.


Author(s):  
Lei Zhang ◽  
Minhui Chang

Abstract In inpainting methods for object removal, the SSD (Sum of Squared Differences) is commonly used to measure the similarity between an exemplar patch and the target patch, and it has a decisive impact on the restoration results. Although this matching rule is relatively simple, it is prone to mismatch errors; worse, such errors can accumulate as the process continues, so that unexpected objects are introduced into the target region and the result fails to meet the requirements of visual consistency. In view of these problems, we propose an inpainting method for object removal based on a difference-degree constraint. First, we define the MSD (Mean of Squared Differences) and use it to measure the differences between corresponding pixels at known positions in the target patch and the exemplar patch. Second, we define the SMD (Square of Mean Differences) and use it to measure the differences between the pixels at known positions in the target patch and the pixels at unknown positions in the exemplar patch. Third, based on the MSD and the SMD, we define a new matching rule and use it to find the most similar exemplar patch in the source region. Finally, we use that exemplar patch to restore the target patch. Experimental results show that the proposed method effectively prevents mismatch errors and improves the restoration effect.
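The abstract defines the MSD and SMD but not how they are combined, so the sketch below implements the two quantities as described and assumes a plain sum for the final matching cost; the `known` mask marks the pixels of the target patch that are already filled.

```python
# Hedged sketch of the matching quantities described above. The paper's
# actual weighting of MSD and SMD may differ from the sum assumed here.
import numpy as np

def msd(target, exemplar, known):
    """Mean of squared differences over the pixels known in the target patch."""
    diff = target[known].astype(float) - exemplar[known].astype(float)
    return np.mean(diff ** 2)

def smd(target, exemplar, known):
    """Square of the difference between the mean of the known target pixels
    and the mean of the exemplar pixels at the target's unknown positions."""
    if known.all():                       # no unknown positions: SMD vanishes
        return 0.0
    return (target[known].mean() - exemplar[~known].mean()) ** 2

def match_cost(target, exemplar, known):
    # Assumed combination; lower cost = more similar exemplar patch.
    return msd(target, exemplar, known) + smd(target, exemplar, known)
```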


2008 ◽  
Vol 17 (02) ◽  
pp. 303-320 ◽  
Author(s):  
WEI SONG ◽  
BINGRU YANG ◽  
ZHANGYAN XU

Because of the inherent computational complexity, mining the complete set of frequent itemsets in dense datasets remains a challenging task; mining Maximal Frequent Itemsets (MFIs) is an alternative that addresses the problem. The Set-Enumeration Tree (SET) is a common data structure in several MFI mining algorithms, for which the process of mining MFIs can be viewed as searching the set-enumeration tree. To reduce the search space, this paper proposes a new MFI mining algorithm, Index-MaxMiner, which employs a hybrid search strategy blending breadth-first and depth-first search. First, the index array is proposed, and a bitmap-based algorithm for computing it is presented. By attaching a subsume index to frequent items, Index-MaxMiner discovers all candidate MFIs in a single breadth-first pass, which avoids first-level nodes that cannot participate in the answer set and drastically reduces the number of candidate itemsets. Depth-first search is then used to generate all MFIs from the candidates. This implements a jumping search in the SET and greatly reduces the search space. Experimental results show that the proposed algorithm is efficient, especially for dense datasets.
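Index-MaxMiner's subsume index and bitmap machinery cannot be reconstructed from the abstract alone; the following hedged sketch shows only the underlying idea of depth-first maximal-itemset search over a set-enumeration tree, with a superset check to keep only maximal results.

```python
# Hedged sketch (not Index-MaxMiner): depth-first traversal of the
# set-enumeration tree over an ordered item list; an itemset is recorded
# as maximal when no frequent extension exists and no previously found
# MFI is a strict superset.
def support(itemset, transactions):
    return sum(itemset <= t for t in transactions)

def mine_mfi(items, transactions, minsup, prefix=frozenset(), mfis=None):
    if mfis is None:
        mfis = []
    extended = False
    for i, item in enumerate(items):
        cand = prefix | {item}
        if support(cand, transactions) >= minsup:
            extended = True
            mine_mfi(items[i + 1:], transactions, minsup, cand, mfis)
    if not extended and prefix and not any(prefix < m for m in mfis):
        mfis.append(prefix)
    return mfis

transactions = [frozenset(t) for t in (["a", "b", "c"], ["a", "b"],
                                       ["a", "c"], ["b", "c"])]
print(mine_mfi(list("abc"), transactions, minsup=2))
# -> [frozenset({'a', 'b'}), frozenset({'a', 'c'}), frozenset({'b', 'c'})]
```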


2014 ◽  
Vol 24 (4) ◽  
pp. 901-916
Author(s):  
Zoltán Ádám Mann ◽  
Tamás Szép

Abstract Backtrack-style exhaustive search algorithms for NP-hard problems tend to have large variance in their runtime, because “fortunate” branching decisions can lead to a solution quickly, whereas “unfortunate” decisions in another run can steer the algorithm into a region of the search space with no solutions. In the literature, frequent restarting has been suggested as a means to overcome this problem. In this paper, we propose a more sophisticated approach: a best-first search heuristic that quickly moves between parts of the search space, always concentrating on the most promising region. We describe how this idea can be incorporated efficiently into a backtrack search algorithm without sacrificing optimality. Moreover, we demonstrate empirically that, for hard solvable problem instances, the new approach provides significantly higher speed-up than frequent restarting.
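A minimal sketch of the idea, assuming user-supplied `expand`, `promise`, and `is_solution` callbacks: open nodes of the backtrack tree sit in a priority queue keyed by a promise heuristic, so the search always resumes in the most promising region while still exploring the whole space if needed.

```python
# Hedged sketch: best-first exploration of a backtrack tree. Instead of
# committing to one branch (or restarting), every open partial assignment
# stays in a priority queue, and the search repeatedly expands the node
# whose promise value is best. Exhausting the queue proves infeasibility.
import heapq

def best_first_backtrack(root, expand, promise, is_solution):
    """expand(node) -> feasible children; promise(node) -> lower is better."""
    counter = 0                                  # tie-breaker for the heap
    heap = [(promise(root), counter, root)]
    while heap:
        _, _, node = heapq.heappop(heap)
        if is_solution(node):
            return node
        for child in expand(node):               # prune infeasible children in expand
            counter += 1
            heapq.heappush(heap, (promise(child), counter, child))
    return None                                  # search space exhausted: no solution
```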

