Confidence complexity of computer algorithms

Author(s):  
A. A. Kiktenko ◽  
M. N. Lunkovskiy ◽  
K. A. Nikiforov
Author(s):  
W.A. Carrington ◽  
F.S. Fay ◽  
K.E. Fogarty ◽  
L. Lifshitz

Advances in digital imaging microscopy and in the synthesis of fluorescent dyes allow the determination of the 3D distribution of specific proteins, ions, RNA, or DNA in single living cells. Effective use of this technology requires a combination of optical and computer hardware and software for image restoration, feature extraction, and computer graphics. The digital imaging microscope consists of a conventional epifluorescence microscope with computer-controlled focus, excitation and emission wavelengths, and duration of excitation. Images are recorded with a cooled (-80°C) CCD. 3D images are obtained as a series of optical sections at 0.25-0.5 μm intervals. A conventional microscope has substantial blurring along its optical axis. Out-of-focus contributions to a single optical section cause low contrast and flare; details are poorly resolved along the optical axis. We have developed new computer algorithms for reversing these distortions. These image restoration techniques and scanning confocal microscopes both yield significantly better images; the results from the two are comparable.
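The abstract does not name the restoration algorithm, so the sketch below uses Richardson-Lucy deconvolution, a standard technique for reversing out-of-focus blur in optical-section stacks, as a stand-in; the file names and iteration count are hypothetical.

```python
# Illustrative sketch only: the paper's own restoration algorithm is not
# given in this abstract; Richardson-Lucy deconvolution is a standard
# stand-in for reversing out-of-focus blur in 3D optical-section stacks.
import numpy as np
from skimage.restoration import richardson_lucy

# Hypothetical 3D stack: optical sections recorded at 0.25-0.5 um spacing.
stack = np.load("sections.npy")   # assumed shape (z, y, x)

# Hypothetical measured or modeled 3D point-spread function of the microscope.
psf = np.load("psf.npy")
psf = psf / psf.sum()             # normalize so total intensity is preserved

# Iteratively undo the axial blurring; more iterations sharpen further but
# also amplify noise, so the count is a tuning choice.
restored = richardson_lucy(stack, psf, num_iter=30)
```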


2009 ◽  
Vol 14 (2) ◽  
pp. 142-152 ◽  
Author(s):  
Johannes B.J. Bussmann ◽  
Ulrich W. Ebner-Priemer ◽  
Jochen Fahrenberg

Behavior is central to psychology in almost any definition. Although observable activity is a core aspect of behavior, assessment strategies have tended to focus on emotional, cognitive, or physiological responses. When physical activity is assessed, it is done mostly with questionnaires. Converging evidence of only a moderate association between self-reports of physical activity and objectively measured physical activity raises questions about the validity of these self-reports. Ambulatory activity monitoring, defined as a measurement strategy for assessing physical activity, posture, and movement patterns continuously in everyday life, has made major advances over the last decade and has considerable potential for further application in the assessment of observable activity. With new piezoresistive sensors and advanced computer algorithms, the objective measurement of physical activity, posture, and movement is much more easily achieved, and measurement precision has improved tremendously. With this overview, we introduce the reader to some recent developments in ambulatory activity monitoring. We elucidate the discrepancies between objective and subjective reports of activity, outline recent methodological developments, offer the reader a framework for understanding the state of the art in ambulatory activity-monitoring technology, discuss methodological aspects of time-based design and psychometric properties, and demonstrate recent applications. Although not yet mainstream, ambulatory activity monitoring, especially in combination with the simultaneous assessment of emotions, mood, or physiological variables, provides a comprehensive methodology for psychology because of its suitability for explaining behavior in context.
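As a hedged illustration of what such sensors and algorithms do (not the method of any study reviewed here), the sketch below labels one accelerometer epoch as moving, upright, or lying; the thresholds and axis convention are assumptions.

```python
# A minimal sketch (not the authors' method) of how activity monitors can
# separate posture from movement using a triaxial accelerometer signal.
import numpy as np

def classify_epoch(acc):
    """Label one epoch of (n, 3) accelerometer samples, in units of g."""
    # Movement intensity: variability of the signal magnitude.
    magnitude = np.linalg.norm(acc, axis=1)
    if magnitude.std() > 0.1:             # threshold is illustrative only
        return "moving"
    # At rest, gravity dominates: the mean vertical component separates
    # upright postures from lying.
    vertical = acc[:, 2].mean()           # assumes z-axis aligned with trunk
    return "upright" if abs(vertical) > 0.7 else "lying"

# Hypothetical usage on a 10-second window sampled at 50 Hz.
epoch = np.random.normal([0, 0, 1], 0.02, size=(500, 3))
print(classify_epoch(epoch))              # -> "upright"
```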


2020 ◽  
Author(s):  
Abdulrahman Takiddin ◽  
Jens Schneider ◽  
Yin Yang ◽  
Alaa Abd-Alrazaq ◽  
Mowafa Househ

BACKGROUND Skin cancer is the most common cancer type affecting humans. Traditional skin cancer diagnosis methods are costly, require a professional physician, and take time. Hence, to aid in diagnosing skin cancer, Artificial Intelligence (AI) tools are being used, including shallow and deep machine learning-based techniques that are trained to detect and classify skin cancer using computer algorithms and deep neural networks. OBJECTIVE The aim of this study is to identify and group the different types of AI-based technologies used to detect and classify skin cancer. The study also examines the reliability of the selected papers by studying the correlation of dataset size and number of diagnostic classes with the performance metrics used to evaluate the models. METHODS We conducted a systematic search for articles in the IEEE Xplore, ACM DL, and Ovid MEDLINE databases following the PRISMA Extension for Scoping Reviews (PRISMA-ScR) guidelines. Studies included in this scoping review had to fulfill several selection criteria: be specifically about skin cancer, detect or classify skin cancer, and use AI technologies. Study selection and data extraction were conducted by two reviewers independently. Extracted data were synthesized narratively, with studies grouped by diagnostic AI technique and evaluation metric. RESULTS We retrieved 906 papers from the 3 databases, of which 53 studies were eligible for this review. Shallow techniques were used in 14 studies and deep techniques in 39 studies. The studies used accuracy (n=43/53), the area under the receiver operating characteristic curve (AUC) (n=5/53), sensitivity (n=3/53), and F1-score (n=2/53) to assess the proposed models. Studies that use smaller datasets and fewer diagnostic classes tend to report higher accuracy scores. CONCLUSIONS The adoption of AI in the medical field facilitates the diagnosis of skin cancer. However, the reliability of most AI tools is questionable, since small datasets or low numbers of diagnostic classes are used. In addition, direct comparison between methods is hindered by the varied use of evaluation metrics and image types.
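Since the review compares studies by their evaluation metrics, the sketch below shows how the four reported metrics are computed with scikit-learn; all data values are made up for illustration.

```python
# Sketch of the four evaluation metrics the reviewed studies report,
# computed with scikit-learn on hypothetical binary predictions.
from sklearn.metrics import (accuracy_score, roc_auc_score,
                             recall_score, f1_score)

y_true  = [0, 0, 1, 1, 1, 0, 1, 0]                   # ground-truth labels
y_score = [0.1, 0.4, 0.8, 0.7, 0.3, 0.2, 0.9, 0.6]   # model probabilities
y_pred  = [int(s >= 0.5) for s in y_score]           # thresholded labels

print("accuracy   :", accuracy_score(y_true, y_pred))
print("AUC        :", roc_auc_score(y_true, y_score))
print("sensitivity:", recall_score(y_true, y_pred))  # recall = sensitivity
print("F1-score   :", f1_score(y_true, y_pred))
```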


Author(s):  
Mark Newman

This chapter introduces some of the fundamental concepts of numerical network calculations. The chapter starts with a discussion of basic concepts of computational complexity and data structures for storing network data, then progresses to the description and analysis of algorithms for a range of network calculations: breadth-first search and its use for calculating shortest paths, shortest distances, components, closeness, and betweenness; Dijkstra's algorithm for shortest paths and distances on weighted networks; and the augmenting path algorithm for calculating maximum flows, minimum cut sets, and independent paths in networks.
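As a minimal illustration of the first algorithm the chapter covers, the sketch below implements breadth-first search over an adjacency-list data structure to compute shortest-path distances from a source node (Dijkstra's algorithm generalizes this to weighted networks). The toy network is hypothetical.

```python
# Minimal sketch of breadth-first search computing shortest-path distances
# from a source node, the building block the chapter describes.
from collections import deque

def bfs_distances(adj, source):
    """adj: dict mapping node -> iterable of neighbours (adjacency list)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:             # first visit = shortest distance
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Hypothetical toy network.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(bfs_distances(adj, 0))              # {0: 0, 1: 1, 2: 1, 3: 2}
```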


Author(s):  
Mark Newman

The study of networks, including computer networks, social networks, and biological networks, has attracted enormous interest in recent years. The rise of the Internet and the wide availability of inexpensive computers have made it possible to gather and analyse network data on an unprecedented scale, and the development of new theoretical tools has allowed us to extract knowledge from networks of many different kinds. The study of networks is broadly interdisciplinary and developments have occurred in many fields, including mathematics, physics, computer and information sciences, biology, and the social sciences. This book brings together the most important breakthroughs in each of these fields and presents them in a unified fashion, highlighting the strong interconnections between work in different areas. Topics covered include the measurement of networks; methods for analysing network data, including methods developed in physics, statistics, and sociology; fundamentals of graph theory; computer algorithms, including spectral algorithms and community detection; mathematical models of networks, such as random graph models and generative models; and models of processes taking place on networks.
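One of the listed topics, random graph models, can be illustrated in a few lines; the G(n, p) generator below is the standard construction, not an excerpt from the book.

```python
# Illustrative sketch: the classic Erdos-Renyi G(n, p) random graph,
# one of the random-graph models the book covers.
import random

def gnp(n, p, seed=None):
    """Return the edge list of a G(n, p) random graph: each of the
    n*(n-1)/2 possible edges is present independently with probability p."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

edges = gnp(10, 0.3, seed=1)
print(len(edges), "edges:", edges[:5])
```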


Processes ◽  
2020 ◽  
Vol 8 (9) ◽  
pp. 1071
Author(s):  
Lucia Billeci ◽  
Asia Badolato ◽  
Lorenzo Bachi ◽  
Alessandro Tonacci

Alzheimer’s disease is notoriously the most common cause of dementia in the elderly, affecting an increasing number of people. Although widespread, its causes and progression modalities are complex and still not fully understood. Neuroimaging techniques, such as diffusion Magnetic Resonance (MR), allow more sophisticated and specific studies of the disease, offering a valuable tool for both its diagnosis and early detection. However, processing large quantities of medical images is not an easy task, and researchers have turned their attention towards machine learning, a set of computer algorithms that automatically adapt their output towards the intended goal. In this paper, a systematic review of recent machine learning applications on diffusion tensor imaging studies of Alzheimer’s disease is presented, highlighting the fundamental aspects of each work and reporting their performance scores. A few examined studies also include mild cognitive impairment in the classification problem, while others combine diffusion data with other sources, like structural magnetic resonance imaging (MRI), in multimodal analyses. The findings of the retrieved works suggest a promising role for machine learning in evaluating effective classification features, like fractional anisotropy, and possibly in achieving higher accuracy across different image modalities.
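As a hedged sketch of the kind of shallow pipeline such studies evaluate (not any specific paper's method), the code below cross-validates a linear SVM on hypothetical tract-wise fractional anisotropy features; all data, dimensions, and labels are made up.

```python
# Hedged sketch: classifying subjects from fractional-anisotropy (FA)
# features with a linear SVM, a common shallow model in this literature.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data: mean FA per white-matter tract for 40 subjects.
X = np.random.rand(40, 20)                # 20 tract-wise FA features
y = np.r_[np.zeros(20), np.ones(20)]      # 0 = control, 1 = Alzheimer's

model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(model, X, y, cv=5)   # accuracy, as most studies report
print("mean CV accuracy:", scores.mean())
```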


2021 ◽  
Vol 2021 (8) ◽  
Author(s):  
Anamaría Font ◽  
Bernardo Fraiman ◽  
Mariana Graña ◽  
Carmen A. Núñez ◽  
Héctor Parra De Freitas

Abstract Compactifications of the heterotic string on special T^d/ℤ_2 orbifolds realize a landscape of string models with 16 supercharges and a gauge group on the left-moving sector of reduced rank d + 8. The momenta of untwisted and twisted states span a lattice known as the Mikhailov lattice II^(d), which is not self-dual for d > 1. By using computer algorithms which exploit the properties of lattice embeddings, we perform a systematic exploration of the moduli space for d ≤ 2, and give a list of maximally enhanced points where the U(1)^(d+8) enhances to a rank d + 8 non-Abelian gauge group. For d = 1, these groups are simply-laced and simply-connected, and in fact can be obtained from the Dynkin diagram of E_10. For d = 2 there are also symplectic and doubly-connected groups. For the latter we find the precise form of their fundamental groups from embeddings of lattices into the dual of II^(2). Our results easily generalize to d > 2.
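The scan relies on properties of lattice embeddings; as a toy illustration only (not the authors' code), the sketch below checks two basic lattice properties that such bookkeeping involves, evenness and self-duality, using the A_1 root lattice as a hypothetical input.

```python
# Toy sketch of lattice bookkeeping: test whether an integer Gram matrix
# defines an even lattice and whether the lattice is unimodular (self-dual).
import numpy as np

def is_even(gram):
    """Even lattice: all inner products integral, all norms even."""
    return np.allclose(gram, np.round(gram)) and all(
        int(gram[i, i]) % 2 == 0 for i in range(len(gram)))

def is_unimodular(gram):
    """Self-dual lattice: Gram determinant of absolute value 1."""
    return abs(round(np.linalg.det(gram))) == 1

gram_a1 = np.array([[2]])                 # A_1 root lattice, norm-2 generator
print(is_even(gram_a1), is_unimodular(gram_a1))   # True False
```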


2021 ◽  
pp. 030631272110109
Author(s):  
Ole Pütz

The formulation of computer algorithms requires the elimination of vagueness. This elimination of vagueness requires exactness in programming, and this exactness can be traced to meeting talk, where it intersects with the indexicality of expressions. This article is concerned with sequences in which a team of computer scientists discuss the functionality of prototypes that are already implemented or possibly to be implemented. The analysis focuses on self-repair because this is a practice where participants can be seen to orient to meanings of different expressions as alternatives. By using self-repair, the computer scientists show a concern with exact descriptions when they talk about existing functionality of their prototypes but not when they talk about potential future functionality. Instead, when participants talk about potential future functionality and attend to meanings during self-repair, they use vague expressions to indicate possibilities. Furthermore, when the computer scientists talk to external stakeholders, they indicate through hedges whenever their descriptions approximate already implemented technical functionality but do not describe it exactly. The article considers whether the code of working prototypes can be said to fix meanings of expressions and how we may account for human agency and non-human resistances during development.

