USING COMBINATORIAL MAPS FOR ALGORITHMS ON GRAPHS

2021 ◽  
Vol 37 (3) ◽  
pp. 185-200
Author(s):  
Robert Cori

The aim of this paper is to come back to a data structure representation of graphs by permutations. This originated in the years 1960-1970 with contributions due to J. Edmonds [7], A. Jacques [11], and W. Tutte [22], in order to treat the embedding of a graph in a surface as a combinatorial object. Some algebraic developments were suggested in [4] and [12]. It was also used for implementation in different situations, such as planarity testing by H. de Fraysseix and P. Rosenstiehl [6], computer vision by G. Damiand and A. Dupas [5], and formal proofs by G. Gonthier [9].
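The representation the abstract refers to can be sketched briefly. In the standard formulation, a combinatorial map encodes a graph embedding as two permutations acting on "darts" (half-edges): an involution alpha pairing the two darts of each edge, and a permutation sigma giving the cyclic order of darts around each vertex; faces are recovered as the orbits of the composed permutation. The example below (a minimal illustration, not the paper's code) computes vertices, edges, faces, and genus for a planar triangle.

```python
# Minimal sketch of a combinatorial map: a graph embedding stored as two
# permutations on "darts" (half-edges). sigma = rotation of darts around
# each vertex; alpha = involution pairing the two darts of each edge.
# Faces are the orbits of phi(d) = sigma(alpha(d)).

def orbits(perm):
    """Return the cycles (orbits) of a permutation given as a dict."""
    seen, cycles = set(), []
    for start in perm:
        if start in seen:
            continue
        cyc, d = [], start
        while d not in seen:
            seen.add(d)
            cyc.append(d)
            d = perm[d]
        cycles.append(cyc)
    return cycles

# Example: a triangle embedded in the plane. Edge i uses darts 2i and 2i+1.
alpha = {0: 1, 1: 0, 2: 3, 3: 2, 4: 5, 5: 4}
# Rotation around the three vertices (one sigma-cycle per vertex):
sigma = {0: 5, 5: 0, 1: 2, 2: 1, 3: 4, 4: 3}

phi = {d: sigma[alpha[d]] for d in alpha}          # face permutation
V, E, F = len(orbits(sigma)), len(orbits(alpha)), len(orbits(phi))
genus = (2 - (V - E + F)) // 2                     # from V - E + F = 2 - 2g
print(V, E, F, genus)                              # → 3 3 2 0
```

The Euler relation V - E + F = 2 - 2g then gives the genus of the surface directly from the three orbit counts, which is what makes the embedding a purely combinatorial object.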

2013 ◽  
Vol 75 (3) ◽  
pp. 149-156 ◽  
Author(s):  
Xin Feng ◽  
Yuanzhen Wang ◽  
Yanlin Weng ◽  
Yiying Tong

1978 ◽  
Vol 3 (3) ◽  
pp. 193-201
Author(s):  
Stephen J. Hegner ◽  
Ruth Anne Maulucci

2020 ◽  
Author(s):  
Shadrack Lusi Muma ◽  
Dickens Omondi Aduda ◽  
Patrick Onyango Ogola

Abstract Background: Numerous factors have been shown to reduce symptomatic and non-symptomatic forms of computer vision syndrome. However, little is known about their impact among computer users diagnosed with severe symptoms of computer vision syndrome. The study assessed whether reduced visual acuity, ocular pathology, and refractive error are associated with computer vision syndrome. Methods: A cross-sectional, university-based study in Kenya. Seven hundred and eighty-three participants were included in the study. Visual acuity was determined using a Snellen chart and converted to the logMAR scale. Ocular pathology was determined through a comprehensive examination using a slit lamp. Computer vision syndrome was determined using a validated questionnaire. Finally, retinoscopy was conducted to determine the type of refractive error. Results: Participants with refractive error above ±0.50 dioptres had greater odds of developing computer vision syndrome, with a multivariate-adjusted odds ratio of 0.73 (95% CI 0.63-0.90). Visual acuity behaved similarly, with a multivariate-adjusted odds ratio of 0.31 (95% CI 0.24-0.47), and ocular pathologies were significantly associated with computer vision syndrome (p = .04). An ocular condition like subconjunctival hemorrhage was not significantly associated with computer vision syndrome (p = .12). Conclusion: Reduced visual acuity, presence of ocular pathology, and refractive error were associated with a greater likelihood of computer vision syndrome, particularly among those who had never had optical correction. Eye care providers are well placed to provide a proper diagnosis of CVS.


2011 ◽  
Vol 48-49 ◽  
pp. 21-24 ◽  
Author(s):  
Xian Ping Fu ◽  
Sheng Long Liao

As the electronic industry advances rapidly toward automatically manufacturing smaller, faster, and cheaper products, computer vision plays a more important role in IC packaging technology than before. One of the important tasks of computer vision is finding a target position through similarity matching. Similarity matching requires distance computation of feature vectors for each target image. In this paper we propose a projection transform of wavelet coefficients based on a multiresolution data-structure algorithm for faster template matching; a position sequence of local sharp variation points in such signals is recorded as features. The proposed approach reduces the number of computations by around 70% compared with the multiresolution data structure algorithm. We use the proposed approach to match similarity between wavelet parameter histograms for image matching. Notably, the proposed fast algorithm provides not only the same retrieval results as exhaustive search, but also faster searching than existing fast algorithms. The proposed approach can be easily combined with existing algorithms for further performance enhancement.
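The key property claimed in the abstract, a faster search that still returns exactly the exhaustive-search result, is characteristic of coarse-to-fine matching with an admissible lower bound. The sketch below is an illustrative analogue (assumed, not the authors' algorithm): it builds a coarse level by Haar-style pairwise averaging and uses the fact that, by convexity, twice the coarse-level distance never exceeds the fine-level distance, so coarse comparisons can prune candidates safely.

```python
# Hedged sketch of coarse-to-fine template matching (not the paper's method).
# Coarse level = Haar-style pairwise averages. By convexity,
# 2 * SSD(coarse) <= SSD(fine), so the coarse distance is a lower bound
# that lets us skip full-resolution comparisons without changing the answer.

def coarsen(x):
    """One pyramid level: average adjacent pairs (Haar low-pass)."""
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]

def ssd(a, b):
    """Sum of squared differences between two equal-length sequences."""
    return sum((u - v) ** 2 for u, v in zip(a, b))

def match(signal, template):
    """Best offset of `template` in `signal`, pruning with the coarse bound."""
    n, m = len(signal), len(template)
    tc = coarsen(template)
    best_off, best_d = None, float("inf")
    for off in range(n - m + 1):
        window = signal[off:off + m]
        if 2 * ssd(coarsen(window), tc) >= best_d:   # lower bound test
            continue                                  # cannot beat the best
        d = ssd(window, template)
        if d < best_d:
            best_off, best_d = off, d
    return best_off, best_d

signal = [0, 1, 0, 2, 5, 7, 6, 4, 1, 0, 2, 1]
template = [5, 7, 6, 4]
print(match(signal, template))   # → (4, 0): exact match at offset 4
```

Because the pruning test uses a true lower bound, the result is identical to exhaustive search, matching the retrieval-equivalence claim in the abstract.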


2015 ◽  
Vol 15 (4-5) ◽  
pp. 726-741 ◽  
Author(s):  
NATALIIA STULOVA ◽  
JOSÉ F. MORALES ◽  
MANUEL V. HERMENEGILDO

Abstract The use of annotations, referred to as assertions or contracts, to describe program properties for which run-time tests are to be generated has become frequent in dynamic programming languages. However, the frameworks proposed to support such run-time testing generally incur high time and/or space overheads over standard program execution. We present an approach for reducing this overhead that is based on the use of memoization to cache intermediate results of check evaluation, avoiding repeated checking of previously verified properties. Compared to approaches that reduce checking frequency, our proposal has the advantage of being exhaustive (i.e., all tests are checked at all points) while still being much more efficient than standard run-time checking. Compared to the limited previous work on memoization, it performs the task without requiring modifications to data structure representation or checking code. While the approach is general and system-independent, we present it for concreteness in the context of the Ciao run-time checking framework, which allows us to provide an operational semantics with checks and caching. We also report on a prototype implementation and provide some experimental results that support that using a relatively small cache leads to significant decreases in run-time checking overhead.
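The core idea, caching the outcome of a property check so that re-verifying an already-checked value is a table lookup, can be sketched as follows (an illustrative sketch, not the Ciao implementation; keying by object identity assumes immutable data, since a real system must invalidate entries on mutation).

```python
# Illustrative sketch of memoized run-time checking (not the Ciao system).
# The result of an expensive property check is cached so that re-checking a
# previously verified value is a dictionary lookup. Keyed by id(), which
# assumes the checked values are immutable (a real system must invalidate
# cache entries when a value is mutated).

def memoized_check(prop):
    cache = {}                      # id(value) -> previously computed result
    def check(value):
        key = id(value)
        if key not in cache:
            cache[key] = prop(value)
        return cache[key]
    check.cache = cache
    return check

calls = []
@memoized_check
def is_sorted(xs):
    calls.append(1)                 # count real (non-cached) evaluations
    return all(a <= b for a, b in zip(xs, xs[1:]))

data = (1, 2, 3, 5)
assert is_sorted(data) and is_sorted(data) and is_sorted(data)
print(len(calls))                   # → 1: the property ran only once
```

This preserves exhaustiveness, every call site still performs a check, while turning repeated checks of the same verified value into constant-time lookups, which is the overhead reduction the abstract describes.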


Anomaly detection is a notable and emergent problem in many diverse fields such as information theory, deep learning, computer vision, machine learning, and statistics, and has been researched in various applications from diverse domains including agriculture, health care, banking, education, and transport. Recently, a number of important anomaly detection techniques of diverse kinds have appeared. The main aim of this paper is to provide a broad summary of current developments in anomaly detection, particularly for video data with mixed types and high dimensionality, where identifying anomalous behaviors, events, or patterns is a significant task. The paper discusses the advantages and disadvantages of the detection methods, with experiments on publicly available benchmark datasets to assess numerous popular and classical methods and models. The objective of this analysis is to give researchers an understanding of recent computer vision and machine learning methods, as well as state-of-the-art deep learning techniques, for detecting anomalies. Finally, the paper offers some directions for future research on anomaly detection.
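For concreteness, the simplest classical baseline such surveys typically cover is statistical outlier scoring (an assumed illustration, not this paper's method): flag observations whose z-score exceeds a threshold.

```python
# A minimal classical anomaly-detection baseline of the kind surveyed
# (an assumption for illustration, not the paper's method): flag points
# lying more than `threshold` standard deviations from the mean.

import statistics

def zscore_anomalies(xs, threshold=3.0):
    """Return indices of values whose |z-score| exceeds the threshold."""
    mu = statistics.fmean(xs)
    sd = statistics.pstdev(xs)
    if sd == 0:
        return []                   # constant signal: nothing is anomalous
    return [i for i, x in enumerate(xs) if abs(x - mu) / sd > threshold]

readings = [10, 11, 9, 10, 12, 10, 11, 95, 10, 9]
print(zscore_anomalies(readings, threshold=2.0))   # → [7]
```

Deep-learning detectors replace the mean/variance model with a learned model of normal behavior, but the scoring structure, deviation from a model of normality, is the same.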


2012 ◽  
Vol 11 (1) ◽  
pp. 15-24
Author(s):  
Wenyu Chen ◽  
Yusha Li ◽  
Jianjiang Pan ◽  
Jianmin Zheng ◽  
Yiyu Cai

Fewer control points are needed to represent a shape with T-splines compared to NURBS, and consequently less time is spent in modeling. While becoming more and more accepted by commercial software, T-splines are, however, not yet part of VRML/X3D. A T-spline VRML is proposed in this work. An effective data structure is designed for T-splines to support online visualization. Compared to the NURBS and polygonal representations, the proposed T-spline data structure representation can significantly reduce the VRML file size, which is a central concern in online applications. As such, complex objects modeled in T-spline form have better chances of real-time transfer from servers to clients. Similar to other VRML nodes, the T-spline VRML node can support geometry, color, and texture. Users can interact with T-splines more effectively for LOD and animation applications.
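The file-size argument rests on T-splines permitting local refinement: a NURBS tensor-product patch must carry a full grid of control points, while a T-mesh adds points only where detail is needed. The sketch below is hypothetical (the field names and counts are assumptions, not the paper's node definition) and only illustrates that contrast.

```python
# Hypothetical sketch (field names are assumptions, not the paper's VRML node):
# a T-spline surface record stores control points plus per-point local knot
# intervals, so refining one region does not force full rows/columns of new
# control points the way a NURBS tensor-product grid does.

from dataclasses import dataclass

@dataclass
class TSplineNode:
    control_points: list        # (x, y, z, weight) tuples
    knot_intervals: list        # per-point local knot intervals (d_i, e_i)
    color: tuple = (1.0, 1.0, 1.0)
    texture_url: str = ""

node = TSplineNode(control_points=[(0.0, 0.0, 0.0, 1.0)],
                   knot_intervals=[(1.0, 1.0)])

# Illustrative point counts for one locally refined patch (assumed numbers):
nurbs_points = 16 * 16              # full grid after a global refinement
tspline_points = 8 * 8 + 12         # coarse grid plus a few local inserts
print(nurbs_points, tspline_points) # → 256 76
```

Fewer control points means fewer coordinates serialized into the VRML file, which is the size reduction the abstract highlights for online transfer.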

