Sense Theory. Sense Function

2019 ◽  
Author(s):  
Egger L. Mielberg

The Sense Theory is not a part of traditional mathematics. It is a new paradigm for formalizing the complex cognitive processes of the human brain. The basis of the theory is a sense function, which determines sense conformity between a set of objects and/or events and a single subject (the described object/event). The sense function has a series of unique properties that can help find associative connections between trillions of objects/events of different types. With the function, we can investigate the whole process of forming a single sense from a big data set of different business or scientific events.
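The abstract leaves the function's signature informal. Purely as a reading aid, one might sketch it as a map from an object/event set and a subject to a degree of conformity; the domain, codomain, and notation below are our assumptions, not definitions from the paper.

```latex
% Illustrative signature only -- all symbols are assumptions, not the
% paper's definitions. O ranges over sets of objects/events, s over
% single described subjects, and the value is a degree of conformity.
f_{s} \colon 2^{\mathcal{O}} \times \mathcal{S} \longrightarrow [0, 1],
\qquad
f_{s}(O, s) = \text{sense conformity between the set } O \text{ and the subject } s
```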

2019 ◽  
Author(s):  
Egger L. Mielberg

Cognitive processes of the human brain are strongly tied to a well-known part of the brain, the cortex. All psychological, logical, or illogical decisions made by a human being are products of the cortex. Thus, a mathematical theory that closely approximates the processes of the cortex could become a good springboard for the creation of a self-learning intellectual system, a real Artificial Intelligence. We propose a new concept of mathematical theory that makes it possible to form, find, and separate the senses of two or more objects of different natures. The theory encompasses knowledge from cybernetics, linguistics, neurobiology, and classical mathematics. The Sense Theory is not a part of traditional mathematics as we know it now; it is a new paradigm for formalizing the complex cognitive processes of the human brain.


2021 ◽  
Vol 251 ◽  
pp. 01047
Author(s):  
Lei Qiong

Big data has become a new factor of production in the era of digital governance. It has changed the thinking and mode of decision-making, enabled governance decision-making, and created a new decision-making paradigm. In practice, however, limitations of data availability, time, cost, and cognitive and psychological factors mean that the available data are often inconsistent with the panoramic data. How to judge and resolve the inconsistency between a data subset and the full data set is therefore a precondition for scientific decision-making. From the perspective of consistency, this paper constructs the basic assumptions of governance decision-making, reviews and explains paths for reconciling the available data subset with the expected data set, broadens the research perspective on measuring data similarity, and strengthens the case for the feasibility of data-driven governance decision-making.
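As one concrete (assumed) way to operationalize that subset-to-set consistency check, the sketch below scores how closely a subset's empirical distribution matches the full data set's, using the Jensen-Shannon distance. The metric choice and all names are ours; the paper does not fix a measure.

```python
# Score consistency between an available data subset and the expected
# (panoramic) data set by comparing their empirical distributions.
import numpy as np
from scipy.spatial.distance import jensenshannon

def subset_consistency(full: np.ndarray, subset: np.ndarray, bins: int = 20) -> float:
    """Return a score in [0, 1]; 1.0 means the subset mirrors the full set."""
    lo, hi = float(full.min()), float(full.max())
    p, _ = np.histogram(full, bins=bins, range=(lo, hi))
    q, _ = np.histogram(subset, bins=bins, range=(lo, hi))
    # jensenshannon normalizes the histograms; base=2 bounds the distance by 1.
    return 1.0 - jensenshannon(p, q, base=2)

rng = np.random.default_rng(0)
full = rng.normal(size=100_000)
print(subset_consistency(full, full[:500]))      # representative subset -> near 1
print(subset_consistency(full, full[full > 1]))  # biased subset -> clearly lower
```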


2016 ◽  
Vol 12 (2) ◽  
pp. 4255-4259
Author(s):  
Michael A Persinger ◽  
David A Vares ◽  
Paula L Corradini

The human brain was assumed to be an elliptical electric dipole. Repeated quantitative electroencephalographic measurements over several weeks were completed for a single subject who sat facing either magnetic east or magnetic south. The predicted potential-difference equivalence for the torque while facing perpendicular (west-to-east) to the northward component of the geomagnetic field (relative to facing south) was 4 μV; the actual measurement was 10 μV. The period of oscillation around the central equilibrium, based upon the summed moment of inertia of neuronal processes within the cerebral cortices, was 1 to 2 ms, which brackets the duration of axonal action potentials and the latencies for diffusion of neurotransmitters. The calculated additional energy available to each neuron within the human cerebrum during the torque condition was ~10⁻²⁰ J, the same order of magnitude as the energy associated with action potentials, resting membrane potentials, and ligand-receptor binding. It is also the basic energy at the level of the neuronal cell membrane that originates from gravitational forces upon a single cell and from the local expression of the uniaxial magnetic anisotropy constant of ferritin, which occurs in the brain. These results indicate that the more complex electrophysiological functions that are strongly correlated with cognitive and related human properties can be described by basic physics and may respond to specific geomagnetic spatial orientations.
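The ~10⁻²⁰ J scale can be sanity-checked with a one-line estimate (our back-of-envelope check, not the authors' derivation): the work done moving one elementary charge across a typical resting membrane potential of about 70 mV.

```latex
% Back-of-envelope check of the ~10^{-20} J scale (ours, not the
% authors' derivation): work to move one elementary charge across a
% typical resting membrane potential of ~70 mV.
E \approx qV
  = \left(1.6 \times 10^{-19}\,\mathrm{C}\right)
    \left(7 \times 10^{-2}\,\mathrm{V}\right)
  \approx 1.1 \times 10^{-20}\,\mathrm{J}
```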


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1690
Author(s):  
Teague Tomesh ◽  
Pranav Gokhale ◽  
Eric R. Anschuetz ◽  
Frederic T. Chong

Many quantum algorithms for machine learning require access to classical data in superposition. However, for many natural data sets and algorithms, the overhead required to load the data set in superposition can erase any potential quantum speedup over classical algorithms. Recent work by Harrow introduces a new paradigm in hybrid quantum-classical computing to address this issue, relying on coresets to minimize the data loading overhead of quantum algorithms. We investigated using this paradigm to perform k-means clustering on near-term quantum computers, by casting it as a QAOA optimization instance over a small coreset. We used numerical simulations to compare the performance of this approach to classical k-means clustering. We were able to find data sets with which coresets work well relative to random sampling and where QAOA could potentially outperform standard k-means on a coreset. However, finding data sets where both coresets and QAOA work well—which is necessary for a quantum advantage over k-means on the entire data set—appears to be challenging.
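As a classical-side illustration of the coreset idea (not the authors' sensitivity-based coreset construction or their QAOA formulation), the sketch below runs weighted k-means on a uniformly sampled, uniformly weighted coreset and compares its clustering cost on the full data against ordinary k-means.

```python
# Compare k-means on the full data against weighted k-means on a tiny
# coreset. Uniform sampling with uniform weights stands in for a real
# sensitivity-based coreset; the synthetic data are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(m, 0.5, size=(500, 2)) for m in (-3, 0, 3)])

k, m = 3, 30
idx = rng.choice(len(X), size=m, replace=False)
coreset, weights = X[idx], np.full(m, len(X) / m)   # each point stands for len(X)/m points

full_km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
core_km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(coreset, sample_weight=weights)

def cost(X, centers):
    """Sum of squared distances from each point to its nearest center."""
    return np.sum(np.min(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1))

print("full-data cost:", cost(X, full_km.cluster_centers_))
print("coreset cost  :", cost(X, core_km.cluster_centers_))  # usually close
```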


Author(s):  
Yihao Tian

Big data refers to unstructured data sets of considerable volume, coming from various sources such as the internet and business organizations, in various formats. Predicting consumer behavior is a core responsibility for most dealers. Market research can reveal consumer intentions, but penetrating the veil that protects real customer motivations from closer scrutiny can be a tall order even for a well-designed research project. Customer-behavior work usually focuses on customer data mining, with each model structured at one stage to answer one query; predicting customer behavior is a complex and uncertain challenge. In this paper, advanced mathematical and big data analytics (BDA) methods are applied to predict customer behavior. Predictive behavior analytics can provide modern marketers with multiple insights to optimize their strategic efforts. The model goes beyond analyzing historical evidence, using mathematical methods to make well-informed assumptions about what will happen in the future. Although the method itself is complex, its results are straightforward for most customers to use, and consumer behavior models that combine many variables with big data produce predictions that are usually quite accurate. This paper develops an association-rule-mining model to predict customers' behavior, improve accuracy, and derive major patterns in consumer data. The findings show that the recommended BDA method improves big data analytics usability in the organization (98.2%), the risk management ratio (96.2%), operational cost (97.1%), the customer feedback ratio (98.5%), and the demand prediction ratio (95.2%).
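The paper's exact BDA pipeline is not reproduced here; the following minimal, self-contained sketch shows the core of association rule mining, with illustrative transactions and thresholds.

```python
# Mine pairwise association rules A -> B from market-basket transactions.
# Transactions and thresholds are made-up illustrations.
from itertools import combinations
from collections import Counter

transactions = [
    {"milk", "bread", "butter"},
    {"bread", "butter"},
    {"milk", "bread"},
    {"milk", "butter"},
    {"bread", "butter", "jam"},
]
min_support, min_confidence = 0.4, 0.7
n = len(transactions)

# Count supports of single items and of item pairs.
item_counts = Counter(item for t in transactions for item in t)
pair_counts = Counter(pair for t in transactions for pair in combinations(sorted(t), 2))

# Emit rules whose support and confidence clear the thresholds.
for (a, b), c in pair_counts.items():
    support = c / n
    if support < min_support:
        continue
    for lhs, rhs in ((a, b), (b, a)):
        confidence = c / item_counts[lhs]
        if confidence >= min_confidence:
            print(f"{lhs} -> {rhs}  support={support:.2f} confidence={confidence:.2f}")
```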


2021 ◽  
pp. 014544552110540
Author(s):  
Nihal Sen

The purpose of this study is to provide a brief introduction to effect size calculation in single-subject design studies, including a description of nonparametric and regression-based effect sizes. The rest of the tutorial then focuses on common regression-based methods used to calculate effect size in single-subject experimental studies. We start by describing the differences between five regression-based methods (Gorsuch; White et al.; Center et al.; Allison and Gorman; Huitema and McKean). This is followed by an example using the five regression-based effect size methods and a demonstration of how these methods can be applied to a sample data set. In this way, we answer the question of how the values obtained from different effect size methods differ. We show the specific regression models used in these five regression-based methods and how these models can be obtained from the SPSS program. R² values obtained from these five methods were converted to Cohen's d values and compared. The d values obtained from the same data set were estimated as 0.003, 0.357, 2.180, 3.470, and 2.108 for the Allison and Gorman, Gorsuch, White et al., Center et al., and Huitema and McKean methods, respectively. A brief description of selected statistical programs available for conducting regression-based methods is also given.
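For concreteness, the sketch below fits one of the five models (the four-parameter Huitema and McKean form is assumed) to made-up AB-design data and applies one common R²-to-d conversion, d = 2r/√(1−r²). It is illustrative only, not the tutorial's SPSS procedure.

```python
# Regression-based effect size for a single-subject AB design, using an
# assumed Huitema-McKean-style model: level change, baseline trend, and
# post-intervention slope change. Data are made up for illustration.
import numpy as np
import statsmodels.api as sm

baseline = [3, 4, 3, 5, 4]            # phase A observations
treatment = [7, 8, 9, 8, 10]          # phase B observations
y = np.array(baseline + treatment, dtype=float)
n1 = len(baseline)

T = np.arange(1, len(y) + 1)          # session number
D = (T > n1).astype(float)            # phase dummy (0 = A, 1 = B)
slope_change = (T - (n1 + 1)) * D     # post-intervention slope term

X = sm.add_constant(np.column_stack([T, D, slope_change]))
fit = sm.OLS(y, X).fit()

r2 = fit.rsquared
d = 2 * np.sqrt(r2) / np.sqrt(1 - r2)  # one common r-to-d conversion
print(f"R^2 = {r2:.3f}, Cohen's d = {d:.2f}")
```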


2018 ◽  
Vol 56 (1) ◽  
pp. 6-25 ◽  
Author(s):  
Xiao Jia ◽  
Jin Chen ◽  
Liang Mei ◽  
Qian Wu

Purpose
The purpose of this paper is to answer two questions: What are the influences of top managers' different leadership styles on organizational innovation? And through what mechanism do the different leadership styles exert their different effects on organizational innovation?

Design/methodology/approach
To test the hypothesized model, a data set was built from 133 part-time MBA students at Tsinghua University and Zhejiang University in China, after interviews with several top managers as a pilot study. Using the SPSS macro, hierarchical regression, and bootstrapping analysis, the paper analyzes the effects of two leadership styles on innovation performance through the mediating mechanism of openness, involving both open breadth and open depth.

Findings
The results indicate that transformational leadership enhances, while transactional leadership reduces, organizational innovation performance. Openness breadth and openness depth not only mediate the beneficial effect of transformational leadership on innovation but also mediate the deleterious effect of transactional leadership on innovation.

Originality/value
This study empirically explores the different functions of transformational and transactional leadership in driving organizational innovation performance. Furthermore, an open design or strategy that allows more external knowledge and resources to be absorbed has been claimed as a new paradigm for organizational innovation. Drawing on the open innovation literature, this study integrates the concepts of breadth and depth of openness as the intermediate mechanism explaining the different effects of the two forms of top managers' leadership.
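A generic sketch of the bootstrapped mediation logic (percentile bootstrap of the a×b indirect effect) follows; the simulated variables and effect sizes are illustrative assumptions, not the study's data or the exact SPSS macro it used.

```python
# Percentile-bootstrap test of the indirect (a x b) effect of leadership
# on innovation through openness, on simulated stand-in data.
import numpy as np

rng = np.random.default_rng(42)
n = 133                                    # matches the study's sample size
leadership = rng.normal(size=n)            # e.g., transformational leadership
openness = 0.5 * leadership + rng.normal(size=n)                 # mediator
innovation = 0.4 * openness + 0.1 * leadership + rng.normal(size=n)

def ols(y, X):
    """OLS coefficients of y on the columns of X (intercept first)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

def indirect(idx):
    a = ols(openness[idx], leadership[idx])[1]                   # X -> M path
    b = ols(innovation[idx],
            np.column_stack([openness[idx], leadership[idx]]))[1]  # M -> Y, X held constant
    return a * b

boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect(np.arange(n)):.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```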


Author(s):  
Haixuan Zhu ◽  
◽  
Xiaoyu Jia ◽  
Pengluo Que ◽  
Xiaoyu Hou ◽  
...  

In the era of big data, with the development of computer technology, especially the widespread adoption of mobile terminal devices and the gradual construction of the Internet of Things, the urban physical and social environments have been comprehensively digitized and quantified. The computational mode of thinking has gradually become a new way for human beings to understand and govern complex urban systems, and computational urban science has become the main direction of disciplinary development in modern urban planning. Computational thinking is the thinking of computer science: it reasons with algorithms in terms of time and space complexity, and it provides a new paradigm for index-system construction, data collection, data storage, data analysis, pattern recognition, and dynamic governance in scientific planning and urban management. On this basis, this paper takes the computational thinking mode of the urban planning discipline in the big data era as its research object and the scientific construction of computational urban planning as its research purpose, adopting literature research and interdisciplinary research methods to comprehensively study the connotations of the computational thinking mode of computer science. The paper systematically discusses the construction of urban computing systems, model generation, the theory and methods of digital twinning, the popularization of the computational thinking mode within the urban and rural planning discipline, and the scientific research of computational urban planning, responding to the developmental needs of urban and rural planning disciplines in the era of big data.


2019 ◽  
Vol 2 ◽  
pp. 1-6
Author(s):  
Wenjuan Lu ◽  
Aiguo Liu ◽  
Chengcheng Zhang

Abstract. With the development of geographic information technology, the ways of acquiring geographic information are constantly expanding and spatiotemporal data are exploding in volume, so more and more scholars have entered the field of data processing and spatiotemporal analysis. Traditional data visualization techniques are popular, simple, and easy to understand: basic pie charts and histograms can reveal and analyze characteristics of the data itself, but they still cannot be combined effectively with maps to display the hidden temporal and spatial information and realize its application value. How to fully explore the spatiotemporal information contained in massive data and accurately characterize the spatial distribution and variation rules of geographical objects and phenomena is a key research problem at present. On this basis, this paper designs and constructs a general thematic-data visual analysis system that supports the full workflow of data warehousing, data management, data analysis, and data visualization. Taking Weifang city as the research area and starting from rainfall interpolation analysis and comprehensive population analysis of Weifang, the system achieves fast and efficient display of big data sets and fully displays the characteristics of spatiotemporal data through thematic visualization. The research also adopts the Cassandra distributed database to store, manage, and analyze big data, which reduces the pressure of front-end map drawing to a certain extent and offers good query efficiency and fast processing.
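A minimal sketch of what the Cassandra-backed storage and query layer might look like with the DataStax Python driver follows; the keyspace, table, and column names are illustrative assumptions, not the system's actual schema.

```python
# Store and query thematic observations in Cassandra via the DataStax
# Python driver. Schema and names below are illustrative assumptions.
from datetime import datetime
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])   # contact point of the Cassandra cluster
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS thematic
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS thematic.rainfall (
        station_id  text,
        observed_at timestamp,
        rainfall_mm double,
        PRIMARY KEY (station_id, observed_at)
    )
""")

# Write one observation, then read the station's series back for mapping.
session.execute(
    "INSERT INTO thematic.rainfall (station_id, observed_at, rainfall_mm) "
    "VALUES (%s, %s, %s)",
    ("weifang-001", datetime(2019, 7, 1, 8, 0), 12.5),
)
for row in session.execute(
    "SELECT observed_at, rainfall_mm FROM thematic.rainfall "
    "WHERE station_id = %s", ("weifang-001",)
):
    print(row.observed_at, row.rainfall_mm)
```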


Large volumes of data are generated in various fields and stored in repositories; these are called big data. Big data in healthcare comprises clinical data sets with every patient's records in huge amounts, maintained as Electronic Health Records (EHR). More than 80% of clinical data is in unstructured format and stored in hundreds of forms. The challenge and demand in data storage and analysis is to handle large data sets efficiently and scalably. The Hadoop MapReduce framework uses big data infrastructure to store and operate on any kind of data speedily; it is not solely a storage system but also a platform for data storage as well as processing, and it is scalable and fault-tolerant. Prediction over the data sets is handled by machine learning algorithms. This work focuses on the Extreme Learning Machine (ELM) algorithm, which can find disease-risk predictions in an optimized way by combining ELM with a Cuckoo Search optimization-based Support Vector Machine (CS-SVM). The proposed work also considers the scalability and accuracy of big data models; the proposed algorithm thus handles the computing workload well and achieves good performance in both veracity and efficiency.
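The sketch below implements just the ELM stage in plain numpy (random hidden layer, closed-form output weights via the pseudoinverse) on synthetic data; the Hadoop plumbing and the CS-SVM stage are omitted, and all names and data are illustrative.

```python
# Minimal Extreme Learning Machine (ELM) binary classifier: fix random
# hidden weights, solve the output weights in closed form.
import numpy as np

rng = np.random.default_rng(0)

class ELM:
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)    # random nonlinear feature map

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ y      # closed-form output weights
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta > 0.5).astype(int)

# Synthetic "risk prediction" demo: two overlapping Gaussian classes.
X = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(1, 1, (200, 5))])
y = np.r_[np.zeros(200), np.ones(200)]
model = ELM(n_hidden=100).fit(X, y)
print("train accuracy:", (model.predict(X) == y).mean())
```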

