Spheroidal Domains and Geometric Analysis in Euclidean Space

2021 ◽  
pp. 189-209
Author(s):  
Garret Sobczyk

Clifford's geometric algebra has enjoyed phenomenal development over the last 60 years by mathematicians, theoretical physicists, engineers, and computer scientists in robotics, artificial intelligence and data analysis, introducing a myriad of different and often confusing notations. The geometric algebra of Euclidean 3-space, the natural generalization of both the well-known Gibbs-Heaviside vector algebra and Hamilton's quaternions, is used here to study spheroidal domains, spheroidal-graphic projections, the Laplace equation, and its Lie algebra of symmetries. The Cauchy-Kovalevska extension and the Cauchy kernel function are treated in a unified way. The concept of a quasi-monogenic family of functions is introduced and studied. 
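Since the geometric algebra of Euclidean 3-space contains Hamilton's quaternions as its even subalgebra, the rotor construction underlying such work can be sketched with plain quaternion arithmetic. This is an illustrative sketch only (the function names are ours, not the paper's): a rotor acts on a vector by the sandwich product, which in quaternion form is q v q*.

```python
import math

def qmul(a, b):
    # Hamilton product of quaternions (w, x, y, z).
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(v, axis, angle):
    # A rotor R = cos(t/2) - sin(t/2) B acts by the sandwich product R v R~;
    # in quaternion form this is q v q*, with v embedded as a pure quaternion.
    half = angle / 2.0
    q = (math.cos(half), math.sin(half) * axis[0],
         math.sin(half) * axis[1], math.sin(half) * axis[2])
    qc = (q[0], -q[1], -q[2], -q[3])          # conjugate (reverse) of the rotor
    w, x, y, z = qmul(qmul(q, (0.0, *v)), qc)
    return (x, y, z)

# Rotating e1 by 90 degrees about e3 yields e2:
print(tuple(round(c, 6) for c in rotate((1, 0, 0), (0, 0, 1), math.pi / 2)))
# -> (0.0, 1.0, 0.0)
```

The sandwich product generalizes directly to rotors built from bivectors in any plane, which is why geometric algebra subsumes both quaternions and the Gibbs-Heaviside cross-product formalism.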

Author(s):  
Scott M. Miller

As is well known, analysis of two surfaces in mesh plays a fundamental role in gear theory. In the past, special coordinate systems, vector algebra, or screw theory was used to analyze the kinematics of meshing. The approach here instead relies on geometric algebra, an extension of conventional vector algebra. The elegance of geometric algebra for theoretical developments is demonstrated by examining the so-called “equation of meshing,” which requires that the relative velocity of two bodies at a point of contact be perpendicular to the common surface normal vector. With surprisingly little effort, several alternative forms of the equation of meshing are generated and, subsequently, interpreted geometrically. Via straightforward algebraic manipulations, the results of screw theory and vector algebra are unified. Due to the simplicity with which complex geometric concepts are expressed and manipulated, the effort required to grasp the general three-dimensional meshing of surfaces is minimized.
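The meshing condition the abstract describes can be stated numerically: the relative velocity of the two bodies at the contact point must be perpendicular to the common surface normal. The sketch below (a fixed-frame, fixed-axis illustration with conventional vector algebra, not the paper's geometric-algebra formulation; all names and numbers are ours) checks that condition for two parallel-axis gears.

```python
import numpy as np

def meshing_residual(omega1, r1, omega2, r2, n):
    # Equation of meshing: the relative velocity v12 at the contact point
    # must satisfy v12 . n == 0, where n is the common surface normal and
    # r1, r2 run from each rotation axis to the contact point.
    v12 = np.cross(omega1, r1) - np.cross(omega2, r2)
    return float(np.dot(v12, n))

# Two parallel-axis gears, contact point on the line of centers.
w1, r1 = np.array([0., 0., 1.]),  np.array([2., 0., 0.])    # body 1
w2, r2 = np.array([0., 0., -1.]), np.array([-3., 0., 0.])   # body 2
# The relative velocity here is purely tangential (sliding), so the meshing
# condition holds for a normal along the line of centers ...
print(meshing_residual(w1, r1, w2, r2, np.array([1., 0., 0.])))  # -> 0.0
# ... but fails if the "normal" is taken along the tangent direction.
print(meshing_residual(w1, r1, w2, r2, np.array([0., 1., 0.])))  # -> -1.0
```

Note that the condition does not require the relative velocity to vanish, only its component along the normal; tangential sliding at the contact is perfectly admissible.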


Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6209
Author(s):  
Andrei Velichko

Edge computing is a fast-growing and much-needed technology in healthcare. The problem of implementing artificial intelligence on edge devices is the complexity and high resource intensity of the best-known neural network data analysis methods and algorithms. The difficulty of implementing these methods on low-power microcontrollers with small memory size calls for the development of new, effective neural network algorithms. This study presents a new method for analyzing medical data based on the LogNNet neural network, which uses chaotic mappings to transform input information. The method effectively solves classification problems and calculates risk factors for the presence of a disease in a patient from a set of medical health indicators. The efficiency of LogNNet in assessing perinatal risk is illustrated on cardiotocogram data obtained from the UC Irvine machine learning repository. The classification accuracy reaches ~91%, with ~3–10 kB of RAM used on an Arduino microcontroller. Using the LogNNet network trained on a publicly available database of the Israeli Ministry of Health, a service concept for COVID-19 express testing is provided. A classification accuracy of ~95% is achieved, using ~0.6 kB of RAM. In all examples, the model is evaluated with standard classification quality metrics: precision, recall, and F1-measure. The LogNNet architecture allows the implementation of artificial intelligence on medical peripherals of the Internet of Things with low RAM resources and can be used in clinical decision support systems.


Author(s):  
Ralph Reilly ◽  
Andrew Nyaboga ◽  
Carl Guynes

Facial Information Science is becoming a discipline in its own right, attracting not only computer scientists but also graphic animators and psychologists, all of whom require knowledge to understand how people make and interpret facial expressions (Zeng, 2009). Computer advancements enhance the ability of researchers to study facial expression. Digitized computer-displayed faces can now be used in studies. Current advancements are facilitating not only the researcher's ability to accurately display information but also the automatic recording of the subject's reaction. With increasing interest in Artificial Intelligence and man-machine communications, what importance does the gender of the user play in the design of today's multi-million-dollar applications? Does research suggest that men and women respond to the "gender" of computer-displayed images differently? Can this knowledge be used effectively to design applications specifically for use by men or women? This research is an attempt to understand these questions while studying whether automatic, or pre-attentive, processing plays a part in the identification of facial expressions.


Author(s):  
Seonho Kim ◽  
Jungjoon Kim ◽  
Hong-Woo Chun

Interest in research on health and medical information analysis based on artificial intelligence, especially deep learning techniques, has recently been increasing. Most research in this field has focused on discovering new knowledge for predicting and diagnosing disease by revealing relations between diseases and the information features of data. These features are extracted by analyzing clinical pathology data, such as EHRs (electronic health records), and academic literature using data analysis, natural language processing, and related techniques. However, more research and attention are still needed on applying the latest advanced artificial-intelligence-based data analysis techniques to bio-signal data, i.e., continuous physiological records such as EEG (electroencephalography) and ECG (electrocardiogram). Unlike other types of data, bio-signal data take the form of real-valued time series, and applying deep learning to them raises many issues that must be resolved in preprocessing, learning, and analysis: feature selection is left to the user, the learned components are black boxes, effective features are difficult to recognize and identify, computational complexity is high, and so on. In this paper, to address these issues, we provide an encoding-based Wave2vec time series classifier model, which combines signal processing with deep-learning-based natural language processing techniques. To demonstrate its advantages, we report the results of three experiments conducted on the University of California Irvine EEG data, a real-world benchmark bio-signal dataset. After encoding the bio-signals (in the form of waves), which are real-valued time series, into a sequence of symbols, or into a sequence of wavelet patterns that are then converted to symbols, the proposed model vectorizes the symbols by learning the sequences with deep-learning-based natural language processing.
Models for each class can then be constructed by learning from the vectorized wavelet patterns and the training data, and the resulting models can be used to predict and diagnose diseases by classifying new data. The proposed method improves data readability and makes the feature selection and learning processes more intuitive by converting real-valued time series into sequences of symbols, which also makes influential patterns easier to recognize and identify. Furthermore, by drastically reducing computational complexity without degrading analysis performance, the data simplification achieved by the encoding process facilitates the real-time, large-capacity data analysis that is essential for developing real-time diagnostic systems.
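The encoding step described above, converting a real-valued series into a sequence of symbols that NLP-style embedding models can consume, can be sketched roughly as follows. The function name, the four-letter alphabet, and the SAX-style Gaussian breakpoints are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def symbolize(signal, alphabet="abcd"):
    # z-normalize the series, then quantize each sample into one of four
    # equal-probability bins (Gaussian breakpoints, as in SAX encoding),
    # turning the wave into a "sentence" of symbols.
    x = np.asarray(signal, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)
    breakpoints = [-0.6745, 0.0, 0.6745]   # quartiles of N(0, 1)
    return "".join(alphabet[i] for i in np.searchsorted(breakpoints, x))

print(symbolize([0, 1, 2, 3]))   # -> 'abcd': a monotone ramp sweeps the alphabet
```

Once a signal is a string of symbols, word-embedding machinery (skip-gram, CBOW, and the like) applies directly to windows of symbols, which is the bridge between signal processing and NLP that the model name suggests.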


Qui Parle ◽  
2021 ◽  
Vol 30 (1) ◽  
pp. 119-157
Author(s):  
Brett Zehner

This methodologically important essay aims to trace a genealogical account of Herbert Simon’s media philosophy and to contest the histories of artificial intelligence that overlook the organizational capacities of computational models. As Simon’s work demonstrates, humans’ subjection to large-scale organizations and divisions of labor is at the heart of artificial intelligence. As such, questions of procedures are key to understanding the power assumed by institutions wielding artificial intelligence. Most media-historical accounts of the development of contemporary artificial intelligence stem from the work of Warren S. McCulloch and Walter Pitts, especially the 1943 essay “A Logical Calculus of the Ideas Immanent in Nervous Activity.” Yet Simon’s revenge is perhaps that reinforcement learning systems adopt his prescriptive approach to algorithmic procedures. Computer scientists criticized Simon for the performative nature of his artificially intelligent systems, mainly for his positivism, but he defended his positivism based on his belief that symbolic computation could stand in for any reality and in fact shape that reality. Simon was not looking to actually re-create human intelligence; he was using coercion, bad faith, and fraud as tactical weapons in the reordering of human decision-making. Artificial intelligence was the perfect medium for his explorations.

