TU-C-17A-03: An Integrated Contour Evaluation Software Tool Using Supervised Pattern Recognition for Radiotherapy

2014 ◽  
Vol 41 (6Part26) ◽  
pp. 458-458
Author(s):  
H Chen ◽  
J Tan ◽  
J Kavanaugh ◽  
S Dolly ◽  
H Gay ◽  
...  
2018 ◽  
Vol 2018 ◽  
pp. 1-14
Author(s):  
Alma Delia Cuevas Rasgado

Conjunctions admit different interpretations: they eliminate redundancy: “María se bañó y se peinó” (María bathed and combed her hair); join distinct ideas: “Hoy llovió y no fui a correr” (Today it rained and I did not go running); and form lists: “Eran Alma, Edith y Omar” (They were Alma, Edith and Omar). Conjunctions thus occur in different semantic contexts. People understand each other through common sense even when they express themselves imprecisely from a semantic standpoint, but for a computer this is difficult. To “understand” such sentences, the machine must solve semantic problems; this article addresses one of them. ANACONJ is a text pattern recognition algorithm that applies rules and syntactic patterns to each word of a sentence, identifying sentences that contain conjunctions and building a semantic tree in which the conjunction connects words (nouns, verbs, etc.) according to their meaning. ANACONJ could also serve as a Spanish-teaching software tool and as an app for a service robot.
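As a rough illustration of the rule-and-pattern idea the abstract describes (this is a hypothetical sketch, not the ANACONJ algorithm; the part-of-speech tags and rule set are invented for the example), a conjunction's role can be guessed from the categories of its neighbors:

```python
# Hypothetical sketch of rule-based conjunction classification
# (not the ANACONJ implementation; tags and rules are illustrative).
def classify_conjunction(tagged, i):
    """tagged: list of (word, pos) pairs; i: index of the conjunction."""
    prev_pos = tagged[i - 1][1]
    next_pos = tagged[i + 1][1]
    if prev_pos == "NOUN" and next_pos == "NOUN":
        return "list"            # "Eran Alma, Edith y Omar"
    if "VERB" in (prev_pos, next_pos):
        return "clause-union"    # "Hoy llovió y no fui a correr"
    return "unknown"

sent = [("Eran", "VERB"), ("Alma", "NOUN"), (",", "PUNCT"),
        ("Edith", "NOUN"), ("y", "CONJ"), ("Omar", "NOUN")]
print(classify_conjunction(sent, 4))  # list
```

A full system would attach the classified conjunction as a node in the semantic tree, linking the words it coordinates.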


Robotica ◽  
2002 ◽  
Vol 20 (5) ◽  
pp. 499-508
Author(s):  
Jie Yang ◽  
Chenzhou Ye ◽  
Nianyi Chen

Summary A software tool for data mining (DMiner-I) is introduced, which integrates pattern recognition (PCA, Fisher discriminant, clustering, HyperEnvelop, regression), artificial intelligence (knowledge representation, decision trees), statistical learning (rough sets, support vector machines), and computational intelligence (neural networks, genetic algorithms, fuzzy systems). It consists of nine function modules: pattern recognition, decision trees, association rules, fuzzy rules, neural networks, genetic algorithms, HyperEnvelop, support vector machines, and visualization. The principles, algorithms, and knowledge representation of some of these modules are described. Nonmonotonicity in data mining is handled by concept hierarchies and layered mining. The tool is implemented in Visual C++ under Windows 2000 and has been satisfactorily applied to predicting regularities in the formation of ternary intermetallic compounds in alloy systems and to the diagnosis of brain glioma.
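To make one of the listed modules concrete, here is a tiny, self-contained sketch of clustering (in the spirit of such a tool, not its actual code): 1-D k-means with k = 2, assuming well-separated, non-degenerate data:

```python
# Minimal 1-D k-means sketch, k = 2 (illustrative only, not DMiner-I code).
# Assumes the data forms two non-empty, separable groups.
def kmeans_1d(values, iters=20):
    c0, c1 = min(values), max(values)       # initial centroids at the extremes
    for _ in range(iters):
        a = [v for v in values if abs(v - c0) <= abs(v - c1)]
        b = [v for v in values if abs(v - c0) > abs(v - c1)]
        c0 = sum(a) / len(a)                # update each centroid to its mean
        c1 = sum(b) / len(b)
    return sorted((c0, c1))

data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]
print(kmeans_1d(data))
```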


Author(s):  
Lior Shamir

Abstract Computing machines allow quantitative analysis of large databases of text, providing knowledge that is difficult to obtain without automation. This article describes Universal Data Analysis of Text (UDAT), a text analysis method that extracts a large set of numerical text content descriptors from text files and performs various pattern recognition tasks such as classification, measuring similarity between classes, correlating text with numerical values, and query by example. Unlike several previously proposed methods, UDAT is not based on word frequencies or on links between certain keywords and topics. The method is implemented as an open-source software tool that can produce detailed reports on the quantitative analysis of sets of text files and can export the numerical text content descriptors as comma-separated values files for statistical or pattern recognition analysis with external tools. It also allows the identification of specific text descriptors that differentiate between classes or correlate with numerical values, and it can be applied to knowledge discovery problems in domains such as literature and social media. UDAT is implemented as a command-line tool that runs under Windows, and the open source code can be compiled on Linux systems. UDAT can be downloaded from http://people.cs.ksu.edu/∼lshamir/downloads/udat.
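The general idea of descriptor extraction followed by CSV export can be sketched as follows (the three descriptors here are invented for illustration and are not UDAT's actual feature set):

```python
# Hypothetical sketch of UDAT-style analysis: compute a few numerical
# content descriptors per text and export them as comma-separated values.
# The descriptors are illustrative, not UDAT's real ones.
import csv, io, statistics

def descriptors(text):
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    return {
        "n_words": len(words),
        "mean_word_len": statistics.mean(len(w) for w in words),
        "n_sentences": len(sentences),
    }

def export_csv(texts):
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["n_words", "mean_word_len", "n_sentences"])
    writer.writeheader()
    for t in texts:
        writer.writerow(descriptors(t))
    return buf.getvalue()

print(export_csv(["Short text. Two sentences here."]))
```

Exporting plain CSV, as the abstract notes, lets any external statistics or pattern recognition tool consume the descriptors.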


Author(s):  
Kiril I. Tenekedjiev ◽  
Carlos A. Kobashikawa ◽  
Natalia D. Nikolova ◽  
Kaoru Hirota ◽  
...  

A Bayesian pattern recognition system is proposed that processes information encoded by four types of features: discrete, pseudo-discrete, multi-normal continuous, and independent continuous. This hybrid system utilizes a combined frequentist-subjective approach to probabilities, uses parametric and nonparametric techniques for conditional likelihood estimation, and relies heavily on fuzzy theory for data presentation, learning, and information fusion. The information for training, recognition, and prediction is organized in a database, which is logically structured into three interconnected hierarchical sub-databases. A software tool created under MATLAB assures the consistency, integrity, and maintenance of the database information. Three application examples from the fields of technical and medical diagnostics are presented, illustrating the types of problems and levels of complexity that the database tool can handle.
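A simplified illustration of fusing heterogeneous feature types in a Bayesian classifier (this is a generic naive-Bayes-style sketch, not the authors' system; the classes, tables, and parameters are invented): multiply a discrete conditional probability with a Gaussian likelihood for a continuous feature, then normalize over classes.

```python
# Illustrative Bayesian fusion of one discrete and one continuous feature.
# All priors, tables, and Gaussian parameters below are hypothetical.
import math

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior(priors, disc_tables, gauss_params, disc_val, cont_val):
    scores = {}
    for c in priors:
        scores[c] = (priors[c]
                     * disc_tables[c][disc_val]          # discrete likelihood
                     * gaussian(cont_val, *gauss_params[c]))  # continuous likelihood
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}         # normalize over classes

priors = {"healthy": 0.7, "faulty": 0.3}
disc_tables = {"healthy": {"low": 0.8, "high": 0.2},
               "faulty": {"low": 0.1, "high": 0.9}}
gauss_params = {"healthy": (20.0, 5.0), "faulty": (40.0, 8.0)}
post = posterior(priors, disc_tables, gauss_params, "high", 38.0)
```

The real system layers parametric and nonparametric likelihood estimates and fuzzy fusion on top of this basic Bayes rule.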


Author(s):  
G.Y. Fan ◽  
J.M. Cowley

In recent developments, the ASU HB5 has been modified so that the timing, positioning, and scanning of the finely focused electron probe can be entirely controlled by a host computer. This has made possible an asynchronous handshake between the HB5 STEM and the image processing system, which consists of a host computer (PDP 11/34), a DeAnza image processor (IP 5000) interfaced with a low-light-level TV camera, an array processor (AP 400), and various peripheral devices. This greatly facilitates the pattern recognition technique initiated by Monosmith and Cowley. Software called NANHB5 is under development which, instead of employing a set of photodiodes to detect strong spots on a TV screen, uses various software techniques, including an on-line fast Fourier transform (FFT), to recognize patterns of greater complexity, taking advantage of the sophistication of our image processing system and the flexibility of computer software.
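The core idea of using a Fourier transform to recognize periodic, diffraction-like patterns can be sketched in one dimension (a plain DFT standing in for the 2-D on-line FFT described above; this is not the NANHB5 code):

```python
# Illustrative 1-D DFT sketch (not NANHB5): locate the dominant spatial
# frequency in an intensity profile, the 1-D analogue of spotting strong
# periodic features in a diffraction pattern.
import cmath, math

def dft_magnitudes(signal):
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

def dominant_frequency(signal):
    mags = dft_magnitudes(signal)
    # Ignore the DC term and the mirrored half of the spectrum.
    half = mags[1:len(mags) // 2 + 1]
    return 1 + half.index(max(half))

# A profile with 4 cycles over 32 samples should peak at frequency bin 4.
profile = [math.cos(2 * math.pi * 4 * t / 32) for t in range(32)]
print(dominant_frequency(profile))  # 4
```

A production system would use a fast O(n log n) FFT on 2-D frames rather than this O(n²) direct transform.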


Author(s):  
L. Fei ◽  
P. Fraundorf

Interface structure is of major interest in microscopy. With high-resolution transmission electron microscopes (TEMs) and scanning probe microscopes, it is possible to reveal the structure of interfaces at the unit-cell level, in some cases with atomic resolution. A. Ourmazd et al. proposed quantifying such observations by using vector pattern recognition to map chemical composition changes across the interface in TEM images with unit-cell resolution. The sensitivity of the mapping process, however, is limited by the repeatability of unit-cell images of the perfect crystal, and hence by the amount of delocalized noise, e.g., due to ion milling or beam radiation damage. Bayesian removal of noise, based on statistical inference, can be used to reduce the amount of non-periodic noise in images after acquisition. The basic principle of Bayesian phase-model background subtraction, according to our previous study, is that the optimum (rms-error-minimizing) Fourier phases of the noise can be obtained provided the amplitudes of the noise are given, while the noise amplitude can often be estimated from the image itself.
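A rough, hypothetical illustration of the amplitude-given setting (this is simple amplitude shrinkage, not the authors' Bayesian phase model): given an estimate of the noise amplitude at each Fourier component, shrink each measured amplitude toward zero while keeping the measured phase.

```python
# Illustrative Fourier-domain amplitude subtraction (not the Bayesian
# phase-model method): assumes a known per-component noise amplitude.
import cmath

def subtract_noise(spectrum, noise_amp):
    cleaned = []
    for coeff in spectrum:
        amp, phase = abs(coeff), cmath.phase(coeff)
        new_amp = max(amp - noise_amp, 0.0)   # amplitudes never go negative
        cleaned.append(cmath.rect(new_amp, phase))
    return cleaned

spectrum = [3 + 4j, 0.5 + 0j, 0 - 2j]
cleaned = subtract_noise(spectrum, 1.0)
```

The Bayesian method goes further by inferring the noise phases themselves in an rms-error-minimizing sense, rather than reusing the measured phases as done here.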


1989 ◽  
Vol 34 (11) ◽  
pp. 988-989
Author(s):  
Erwin M. Segal
