An adaptive integrated rule-based algorithm for license plate localization

2012 ◽  
Vol 20 (4) ◽  
Author(s):  
C. Paunwala ◽  
S. Patnaik

Abstract. This paper addresses a license plate localization (LPL) algorithm for complex backgrounds. Most LPL algorithms work under restricted conditions and follow a principle of sequential elimination of blocks, from the image level down to the final LP candidate region. In most algorithms, blocks that do not satisfy the required LP features are filtered out in a top-down manner, which may result in poor efficiency in complex scenarios. The major steps of the proposed approach are adaptive edge mapping, a saliency measure of edge-based rules with confidence-level estimation using fuzzy rules, and a final reassessment of the decision by colour-attribute filtering. The proposed algorithm adapts to across-the-country variations in LP standards and is tested on two data sets, each consisting of more than 700 images: set-1 contains good images, while set-2 includes only constrained images. The algorithm is tested against low contrast due to overexposure or poor lighting, the presence of multiple plates, variation in aspect ratio and plate-compatible backgrounds. It has been observed that the performance degradation imposed by these complex conditions is nominal.
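The abstract does not spell out the fuzzy confidence estimation, so the following Python sketch is illustrative only: it scores a candidate block with trapezoidal membership functions over edge-based features and a single min (AND) rule. The feature names, breakpoints and the rule are assumptions, not the authors' values.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function with support [a, d] and core [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def candidate_confidence(block):
    """Fuzzy confidence that a block is a license plate.

    `block` holds edge-based features; the feature names, membership
    breakpoints and the min-rule inference are illustrative assumptions.
    """
    mu_density = trapezoid(block["edge_density"], 0.15, 0.30, 0.60, 0.80)
    mu_aspect = trapezoid(block["aspect_ratio"], 1.5, 2.5, 5.0, 7.0)
    mu_transitions = trapezoid(block["row_transitions"], 4, 8, 30, 45)
    # Rule: block is a plate IF density AND aspect ratio AND transitions are plate-like
    return min(mu_density, mu_aspect, mu_transitions)

# Example candidate block (values are made up)
print(candidate_confidence({"edge_density": 0.42,
                            "aspect_ratio": 4.2,
                            "row_transitions": 18}))
```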

Life ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. 690
Author(s):  
Clifford F. Brunk ◽  
Charles R. Marshall

While most advances in the study of the origin of life on Earth (OoLoE) are piecemeal, tested against the laws of chemistry and physics, ultimately the goal is to develop an overall scenario for life’s origin(s). However, the dimensionality of non-equilibrium chemical systems, arising from the range of possible boundary conditions and chemical interactions, renders the application of chemical and physical laws difficult. Here we outline a set of simple criteria for evaluating OoLoE scenarios. These include the need for containment, steady energy and material flows, and structured spatial heterogeneity from the outset. The Principle of Continuity, the fact that all life today was derived from first life, suggests favoring scenarios with fewer non-analog (not seen in life today) to analog (seen in life today) transitions in the inferred first biochemical pathways. Top-down data also indicate that a complex metabolism predated ribozymes and enzymes, and that full cellular autonomy and motility occurred post-LUCA. Using these criteria, we find the alkaline hydrothermal vent microchamber complex scenario, with a late-evolving exploitation of the naturally occurring pH (or Na+) gradient by ATP synthase, the most compelling. However, there are as yet so many unknowns that we also advocate for the continued development of as many plausible scenarios as possible.


Author(s):  
D. Amarsaikhan

Abstract. The aim of this research is to classify urban land cover types using an advanced classification method. Features derived from Landsat 8 and Sentinel-1A SAR data sets are used as the input bands to the classification. To extract reliable urban land cover information from the optical and SAR features, a rule-based classification algorithm is constructed that uses spatial thresholds defined from contextual knowledge. The result of the constructed method is compared with that of a standard classification technique and shows higher accuracy. Overall, the study demonstrates that multisource data sets can considerably improve the classification of urban land cover types and that the rule-based method is a powerful tool for producing a reliable land cover map.
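As an illustration of the kind of threshold-based rule set described here (the paper's actual rules and threshold values are not given), a minimal NumPy sketch assuming an NDVI band derived from Landsat 8 and a Sentinel-1A backscatter band might look like this:

```python
import numpy as np

# Illustrative rule-based classifier over per-pixel features.
# Class set, feature names and threshold values are assumptions, not the paper's rules.
URBAN, WATER, VEGETATION, BARE = 1, 2, 3, 4

def classify(ndvi, sar_db):
    """ndvi: Landsat-derived NDVI; sar_db: Sentinel-1A backscatter in dB."""
    labels = np.full(ndvi.shape, BARE, dtype=np.uint8)
    labels[ndvi > 0.4] = VEGETATION                 # dense vegetation
    labels[(ndvi < 0.1) & (sar_db < -18)] = WATER   # smooth surface, low backscatter
    labels[(ndvi < 0.3) & (sar_db > -8)] = URBAN    # strong double-bounce returns
    return labels

# Toy 2x2 scene (values are made up)
ndvi = np.array([[0.05, 0.55], [0.20, 0.02]])
sar = np.array([[-5.0, -12.0], [-10.0, -22.0]])
print(classify(ndvi, sar))
```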


2020 ◽  
Author(s):  
Lucas R. V. Messias ◽  
Cristiano R. Steffens ◽  
Paulo L. J. Drews-Jr ◽  
Silvia S. C. Botelho

Image enhancement is a critical process in image-based systems, where image quality is a crucial factor in achieving good performance. Scenes with a dynamic range above the capability of the camera, or with poor lighting, are challenging conditions that usually result in low-contrast images and, with that, underexposure and/or overexposure problems. In this work, our aim is to restore ill-exposed images. For this purpose, we present UCAN, a small and fast learning-based model capable of restoring and enhancing poorly exposed images. The obtained results are evaluated using image quality indicators, which show that the proposed network is able to improve images damaged by real and simulated exposure problems. Qualitative and quantitative results show that the proposed model outperforms existing models for this task.
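The abstract does not describe the UCAN architecture, so the following PyTorch sketch is not the proposed network; it only illustrates the input/output contract of a small learning-based exposure-restoration model of this kind.

```python
import torch
import torch.nn as nn

class TinyExposureNet(nn.Module):
    """Minimal convolutional sketch for exposure restoration.

    This is NOT the UCAN architecture (which the abstract does not specify);
    it only shows the shape of the task: an ill-exposed RGB image in,
    a corrected RGB image of the same size out.
    """
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),  # map back to [0, 1]
        )

    def forward(self, x):
        return self.net(x)

model = TinyExposureNet()
dummy = torch.rand(1, 3, 128, 128)   # batch of one 128x128 RGB image
restored = model(dummy)
print(restored.shape)                # torch.Size([1, 3, 128, 128])
```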


2021 ◽  
Vol 8 (10) ◽  
pp. 43-50
Author(s):  
Truong et al. ◽  

Clustering is a fundamental technique in data mining and machine learning. Recently, many researchers have become interested in the problem of clustering categorical data, and several new approaches have been proposed. One of the successful and pioneering clustering algorithms is the Minimum-Minimum Roughness algorithm (MMR), a top-down hierarchical clustering algorithm that can handle the uncertainty in clustering categorical data. However, MMR tends to choose the attribute with fewer values and the leaf node with more objects, leading to undesirable clustering results. To overcome these shortcomings, this paper proposes an improved version of the MMR algorithm for clustering categorical data, called IMMR (Improved Minimum-Minimum Roughness). Experimental results on real data sets taken from UCI show that the IMMR algorithm outperforms MMR in clustering categorical data.
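For readers unfamiliar with the rough-set machinery behind MMR, the following pure-Python sketch (toy data, illustrative only, not the authors' IMMR code) computes attribute roughness and picks a splitting attribute by the minimum of the minimum mean roughness:

```python
from collections import defaultdict

def equivalence_classes(rows, attr):
    """Group object indices by their value on `attr` (indiscernibility classes)."""
    classes = defaultdict(set)
    for i, row in enumerate(rows):
        classes[row[attr]].add(i)
    return list(classes.values())

def roughness(target_set, rows, attr):
    """Roughness of `target_set` w.r.t. `attr`: 1 - |lower approx.| / |upper approx.|."""
    lower = upper = 0
    for cls in equivalence_classes(rows, attr):
        if cls <= target_set:      # class fully contained in the target set
            lower += len(cls)
        if cls & target_set:       # class overlaps the target set
            upper += len(cls)
    return 1.0 - lower / upper

def mean_roughness(rows, a_i, a_j):
    """Mean roughness of attribute a_i with respect to a_j, averaged over a_i's values."""
    classes_i = equivalence_classes(rows, a_i)
    return sum(roughness(X, rows, a_j) for X in classes_i) / len(classes_i)

# Toy categorical data set (values are made up)
rows = [
    {"colour": "red",  "shape": "round",  "size": "small"},
    {"colour": "red",  "shape": "round",  "size": "large"},
    {"colour": "blue", "shape": "square", "size": "small"},
    {"colour": "blue", "shape": "round",  "size": "large"},
]
attrs = ["colour", "shape", "size"]
# MMR-style splitting attribute: minimum over attributes of the minimum mean roughness
scores = {a: min(mean_roughness(rows, a, b) for b in attrs if b != a) for a in attrs}
print(min(scores, key=scores.get), scores)
```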


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Hadi Masoumi ◽  
Bahar Farahani ◽  
Fereidoon Shams Aliee

Purpose: Open government data (OGD) has emerged as a radical paradigm shift and endeavor among government administrations across the world, mainly due to its promises of transparency, accountability, public-private collaboration, civic participation, social innovation and data-driven value creation. Complexity, cross-cutting nature, diversity of data sets, interoperability and quality issues usually hamper unlocking the full potential value of data. To tackle these challenges, this paper aims to provide a novel solution using a top-down approach.

Design/methodology/approach: The authors propose a systematic ontology-based approach combined with a novel architecture and its corresponding processes, enabling organizations to carry out all the steps in the OGD value chain. In addition, an OGD platform including a portal (www.iranopendata.ir) and a data management system (www.ogdms.iranopendata.ir) is developed to showcase the proposed solution.

Findings: The efficiency and applicability of the solution are evaluated by a real-life use case on the energy consumption of buildings in the city of Tehran, Iran. Finally, a comparison is made with existing solutions, and the results show the proposed approach is able to address the existing gaps in the literature.

Originality/value: The results imply that modeling and designing the data model, as well as exploiting an ontology-based approach, are critical pillars in creating rich, relevant and well-described OGD data sets. Moreover, clarity on processes, roles and responsibilities is a key factor influencing the quality of the published data services. Thus, to the best of our knowledge, this is the first study that exploits an ontology-based approach in a top-down manner to create OGD data sets.
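Purely as an illustration of what an ontology-based OGD data-set description can look like (the paper's own ontology, vocabulary and identifiers are not reproduced here; the dataset URI and publisher below are hypothetical), a small rdflib sketch using the W3C DCAT vocabulary might be:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

DCAT = Namespace("http://www.w3.org/ns/dcat#")

g = Graph()
g.bind("dcat", DCAT)
g.bind("dcterms", DCTERMS)

# Hypothetical dataset URI; the portal's real identifiers may differ.
ds = URIRef("http://iranopendata.ir/dataset/tehran-building-energy")
g.add((ds, RDF.type, DCAT.Dataset))
g.add((ds, DCTERMS.title, Literal("Energy consumption of buildings in Tehran")))
g.add((ds, DCTERMS.publisher, Literal("Tehran Municipality")))  # assumed publisher
g.add((ds, DCAT.keyword, Literal("energy")))
g.add((ds, DCAT.keyword, Literal("buildings")))

print(g.serialize(format="turtle"))
```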


2008 ◽  
Vol 13 (3) ◽  
pp. 213-225 ◽  
Author(s):  
Albert Orriols-Puig ◽  
Ester Bernadó-Mansilla

2020 ◽  
Vol 39 (3) ◽  
pp. 3825-3837
Author(s):  
Yibin Chen ◽  
Guohao Nie ◽  
Huanlong Zhang ◽  
Yuxing Feng ◽  
Guanglu Yang

The Kernel Correlation Filter (KCF) tracker has shown great potential in precision, robustness and efficiency. However, the candidate region used to train the correlation filter is fixed, so tracking is difficult when the target escapes from the search window due to fast motion. In this paper, an improved KCF is put forward for long-term tracking. First, the moth-flame optimization (MFO) algorithm is introduced into tracking to search for the lost target. Then, the candidate sample strategy of the KCF tracking method is adjusted by the MFO algorithm so that it can cope with fast motion. Finally, we use a conservative learning correlation filter to judge the moving state of the target and combine it with the improved KCF tracker to form a unified tracking framework. The proposed algorithm is tested on a self-made dataset benchmark. Moreover, our method obtains scores on both the distance precision plot (0.891 and 0.842) and the overlap success plot (0.631 and 0.601) on the OTB-2013 and OTB-2015 data sets, respectively. The results demonstrate the feasibility and effectiveness of the method compared with state-of-the-art methods, especially in dealing with fast or uncertain motion.
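MFO itself is a generic population-based optimizer; as a rough, self-contained sketch (not the paper's tracker, and with a deliberately simplified flame-update rule), the following NumPy code shows the spiral search that could be used to re-locate a lost target by minimizing a matching cost:

```python
import numpy as np

def mfo(objective, dim, n_moths=20, iters=100, lb=-5.0, ub=5.0, seed=0):
    """Minimal moth-flame optimization (MFO) sketch for minimizing `objective`.

    Illustrative only; in a tracker, `objective` would score candidate
    target positions. For brevity, flames are simply the current sorted
    moths (canonical MFO keeps the best solutions found so far).
    """
    rng = np.random.default_rng(seed)
    moths = rng.uniform(lb, ub, (n_moths, dim))
    for l in range(iters):
        fitness = np.apply_along_axis(objective, 1, moths)
        order = np.argsort(fitness)
        flames, flame_fit = moths[order], fitness[order]        # best moths become flames
        n_flames = max(1, round(n_moths - l * (n_moths - 1) / iters))
        a = -1.0 - l / iters                                    # decreases from -1 towards -2
        b = 1.0                                                 # logarithmic spiral constant
        for i in range(n_moths):
            j = min(i, n_flames - 1)                            # flame assigned to moth i
            t = (a - 1.0) * rng.random(dim) + 1.0               # t in (a, 1]
            d = np.abs(flames[j] - moths[i])                    # distance to the flame
            moths[i] = d * np.exp(b * t) * np.cos(2 * np.pi * t) + flames[j]
        moths = np.clip(moths, lb, ub)
    return flames[0], flame_fit[0]

# Toy usage: minimize the sphere function
best_x, best_f = mfo(lambda x: np.sum(x ** 2), dim=2)
print(best_x, best_f)
```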


Author(s):  
Balazs Feil ◽  
Janos Abonyi

This chapter aims to give a comprehensive view of the links between fuzzy logic and data mining. It will be shown that knowledge extracted from simple data sets or huge databases can be represented by fuzzy rule-based expert systems. It is highlighted that both model performance and the interpretability of the mined fuzzy models are of major importance, and effort is required to keep the resulting rule bases small and comprehensible. Therefore, in recent years, soft-computing-based data mining algorithms have been developed for feature selection, feature extraction, model optimization, and model reduction (rule-base simplification). The application of these techniques is illustrated using the wine data classification problem. The results illustrate that fuzzy tools can be applied in a synergistic manner through the nine steps of knowledge discovery.
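As a toy counterpart to the wine-data illustration (not the chapter's actual mining and rule-reduction pipeline), one can build a minimal fuzzy rule base from class statistics and classify with a product t-norm:

```python
import numpy as np
from sklearn.datasets import load_wine

# Illustrative fuzzy rule base: one rule per class, with a Gaussian membership
# function per feature whose centre/width come from the class statistics.
# This is a toy stand-in for the chapter's mined and simplified rule bases.
X, y = load_wine(return_X_y=True)
classes = np.unique(y)
centres = np.array([X[y == c].mean(axis=0) for c in classes])
widths = np.array([X[y == c].std(axis=0) + 1e-6 for c in classes])

def firing_strengths(x):
    """Degree to which sample x fires each class rule (product t-norm over features)."""
    memberships = np.exp(-0.5 * ((x - centres) / widths) ** 2)  # per class, per feature
    return memberships.prod(axis=1)

predictions = np.array([firing_strengths(x).argmax() for x in X])
print("training accuracy:", (predictions == y).mean())
```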

