Multi-Region Neural Representation: A novel model for decoding visual stimuli in human brains

2017
Author(s): Muhammad Yousefnezhad, Daoqiang Zhang

Abstract: Multivariate Pattern (MVP) classification holds enormous potential for decoding visual stimuli in the human brain by employing task-based fMRI data sets. MVP techniques face a wide range of challenges, including decreasing noise and sparsity, defining effective regions of interest (ROIs), visualizing results, and the cost of brain studies. To overcome these challenges, this paper proposes a novel model of neural representation, which can automatically detect the active regions for each visual stimulus and then utilize these anatomical regions for visualizing and analyzing the functional activities. The model therefore lets neuroscientists ask what effect a stimulus has on each of the detected regions, instead of just studying the fluctuation of voxels in manually selected ROIs. Moreover, our method analyzes snapshots of the brain image to decrease sparsity, rather than using the whole fMRI time series. Further, a new Gaussian smoothing method is proposed for removing voxel noise at the level of ROIs. The proposed method also enables us to combine different fMRI data sets, reducing the cost of brain studies. Experimental studies on 4 visual categories (words, consonants, objects and nonsense photos) confirm that the proposed method achieves superior performance to state-of-the-art methods.
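As a rough illustration of the ROI-level smoothing idea described above, the hedged sketch below applies Gaussian smoothing separately inside each anatomically labelled region of a single fMRI snapshot. The array names, atlas encoding and sigma value are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: ROI-level Gaussian smoothing of one fMRI "snapshot".
# Assumes a 3D volume `snapshot` and an integer-labelled `atlas` (0 = background).
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_by_roi(snapshot, atlas, sigma=1.0):
    """Smooth voxels only within each labelled region, leaving other regions untouched."""
    smoothed = np.zeros_like(snapshot, dtype=float)
    for label in np.unique(atlas):
        if label == 0:                                   # skip background
            continue
        mask = atlas == label
        region = np.where(mask, snapshot, 0.0)
        weights = gaussian_filter(mask.astype(float), sigma)
        blurred = gaussian_filter(region, sigma)
        # normalise so voxels near the ROI boundary are not attenuated
        smoothed[mask] = blurred[mask] / np.maximum(weights[mask], 1e-8)
    return smoothed

# tiny smoke test with random data
vol = np.random.default_rng(0).normal(size=(16, 16, 16))
atlas = np.zeros((16, 16, 16), dtype=int)
atlas[2:8, 2:8, 2:8] = 1
atlas[9:14, 9:14, 9:14] = 2
print(smooth_by_roi(vol, atlas, sigma=1.5).shape)
```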

2016
Author(s): Muhammad Yousefnezhad, Daoqiang Zhang

Abstract: A universal unanswered question in neuroscience and machine learning is whether computers can decode the patterns of the human brain. Multi-Voxel Pattern Analysis (MVPA) is a critical tool for addressing this question. However, previous MVPA methods face two challenges: decreasing sparsity and noise in the extracted features, and increasing the performance of prediction. To overcome these challenges, this paper proposes Anatomical Pattern Analysis (APA) for decoding visual stimuli in the human brain. This framework develops a novel anatomical feature extraction method and a new imbalance AdaBoost algorithm for binary classification. Further, it utilizes an Error-Correcting Output Codes (ECOC) method for multi-class prediction. APA can automatically detect active regions for each category of visual stimuli. Moreover, it enables us to combine homogeneous datasets for applying advanced classification. Experimental studies on 4 visual categories (words, consonants, objects and scrambled photos) demonstrate that the proposed approach achieves superior performance to state-of-the-art methods.
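The general ECOC-over-binary-boosting pattern mentioned above can be sketched with off-the-shelf components. The snippet below is a hedged stand-in that uses scikit-learn's plain AdaBoost rather than the paper's imbalance-aware variant, with random features in place of the anatomical ones.

```python
# Hedged sketch of ECOC multi-class prediction on top of a binary AdaBoost learner
# (generic scikit-learn components, not the authors' APA code).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.multiclass import OutputCodeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))        # stand-in for extracted anatomical features
y = rng.integers(0, 4, size=200)      # 4 visual categories (placeholder labels)

ecoc = OutputCodeClassifier(AdaBoostClassifier(n_estimators=100), code_size=2, random_state=0)
print(cross_val_score(ecoc, X, y, cv=5).mean())
```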


Author(s): Guangming Xing

Classification/clustering of XML documents based on their structural information is important for many tasks related to document management. In this chapter, we present a suite of algorithms to compute the cost of approximate matching between XML documents and schemas. A framework for classifying/clustering XML documents by structure is then presented, based on the computation of distances between XML documents and schemas. The backbone of the framework is the feature representation using a vector of these distances. Experimental studies were conducted on various XML data sets, suggesting the efficiency and effectiveness of our approach as a solution for structural classification/clustering of XML documents.
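As a hedged sketch of the distance-vector representation (not the chapter's matching algorithms), the snippet below substitutes a crude tag-multiset difference for the document-to-schema edit cost, builds one feature vector per document, and clusters the vectors.

```python
# Illustrative only: each XML document is represented by a vector of structural
# distances to a set of reference schemas, then the vectors are clustered.
from collections import Counter
from xml.etree import ElementTree as ET
from sklearn.cluster import KMeans

def tag_profile(xml_string):
    """Multiset of element tags, a crude proxy for document structure."""
    return Counter(el.tag for el in ET.fromstring(xml_string).iter())

def structural_distance(doc_profile, schema_profile):
    """Symmetric difference of tag counts, standing in for an edit-distance cost."""
    keys = set(doc_profile) | set(schema_profile)
    return sum(abs(doc_profile[k] - schema_profile[k]) for k in keys)

docs = ["<book><title/><author/></book>", "<article><title/><abstract/></article>"]
schemas = ["<book><title/><author/><year/></book>", "<article><title/><abstract/><body/></article>"]

schema_profiles = [tag_profile(s) for s in schemas]
features = [[structural_distance(tag_profile(d), sp) for sp in schema_profiles] for d in docs]
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)
```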


2016
Author(s): Muhammad Yousefnezhad, Daoqiang Zhang

Abstract: Multivariate Pattern (MVP) classification can map different cognitive states to brain tasks. One of the main challenges in MVP analysis is validating the generated results across subjects. Analyzing multi-subject fMRI data requires accurate functional alignment between the neuronal activities of different subjects, which can significantly improve the performance and robustness of the final results. Hyperalignment (HA) is one of the most effective functional alignment methods and can be mathematically formulated via Canonical Correlation Analysis (CCA). Since HA mostly uses unsupervised CCA techniques, its solution may not be optimized for MVP analysis. By incorporating the idea of Local Discriminant Analysis (LDA) into CCA, this paper proposes Local Discriminant Hyperalignment (LDHA) as a novel supervised HA method, which can provide better functional alignment for MVP analysis. The locality is defined based on the stimulus categories in the training set: the correlation between all stimuli in the same category is maximized, while the correlation between stimuli from distinct categories is driven toward zero. Experimental studies on multi-subject MVP analysis confirm that the LDHA method achieves superior performance to other state-of-the-art HA algorithms.
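For orientation, a hedged sketch of the objectives involved (our notation, not taken from the paper): classical hyperalignment finds per-subject mappings that make responses maximally similar across subjects, and LDHA, as described above, restricts the correlation to same-category stimulus pairs while pushing between-category correlation toward zero.

```latex
% Classical hyperalignment over S subjects with response matrices X_i:
\min_{R_1,\dots,R_S}\; \sum_{i<j} \left\| X_i R_i - X_j R_j \right\|_F^2
\quad \text{s.t. } R_i^\top R_i = I .
% A supervised, LDHA-style variant (W = same-category stimulus pairs, B = cross-category pairs):
\max_{R_i, R_j}\; \sum_{(p,q)\in W} \mathrm{corr}\!\big( (X_i R_i)_p,\, (X_j R_j)_q \big)
\;-\; \eta \sum_{(p,q)\in B} \mathrm{corr}\!\big( (X_i R_i)_p,\, (X_j R_j)_q \big).
```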


2021
Vol 320, pp. 03004
Author(s): Aleksander Moskalev, Nikita Tsygankov

Digitalization entails the application of digital technologies to a wide range of existing tasks and enables the solution of new ones. This article uses data sets on current sales volumes of innovative products to forecast their future sales. The first stage applies the F. Bass innovation diffusion model to calculate the diffusion coefficients of different modifications of Sony's PlayStation. To estimate the factors that influence innovation and related results within firms, IMPRINTA, a company producing 3D printers, was surveyed. It is shown that, when technical parameters are taken into account, a cascade diffusion model of product innovation best describes the process of product realization. The information obtained from the diffusion analysis of two 3D printer models can be used to improve the efficiency of key business processes, including production, procurement, marketing and advertising. The diffusion model made it possible to generate three different scenarios for the release and promotion of a new modification of one of the 3D printer models, depending on the selected niche, the time of market launch and the intensity of the marketing and advertising campaign. Each scenario enables adjusting the cost and technical parameters of the future modification.
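For reference, the standard (non-cascade) Bass model expresses per-period adoptions through a market potential m, an innovation coefficient p and an imitation coefficient q. The hedged sketch below fits these coefficients to a made-up sales series with scipy, as an illustration of the first-stage estimation, not the article's own calculation.

```python
# Fit the standard Bass diffusion model to placeholder unit-sales data.
import numpy as np
from scipy.optimize import curve_fit

def bass_sales(t, m, p, q):
    """Bass model: non-cumulative adoptions at time t for market size m,
    innovation coefficient p and imitation coefficient q."""
    e = np.exp(-(p + q) * t)
    return m * ((p + q) ** 2 / p) * e / (1 + (q / p) * e) ** 2

t = np.arange(1, 9)                                        # periods since launch
sales = np.array([5., 12., 22., 30., 28., 20., 12., 6.])   # placeholder sales figures
(m, p, q), _ = curve_fit(bass_sales, t, sales, p0=[200, 0.03, 0.4], maxfev=10000)
print(f"market potential m={m:.0f}, innovation p={p:.3f}, imitation q={q:.3f}")
```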


2012
Vol 12 (4), pp. 77-94
Author(s): Hari Seetha, R. Saravanan, M. Narasimha Murty

Abstract: Support Vector Machines (SVMs) have gained prominence because of their high generalization ability across a wide range of applications. However, the size of the training data required to achieve commendable performance becomes extremely large with increasing dimensionality when RBF and polynomial kernels are used. Synthesizing new training patterns curbs this effect. In this paper, we propose a novel multiple kernel learning approach to generate a synthetic training set that is larger than the original training set. The method is evaluated on seven benchmark datasets, and experimental studies show that an SVM classifier trained with the synthetic patterns outperforms the traditional SVM classifier.
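The sketch below illustrates the general "train on an enlarged, partly synthetic set" idea with a deliberately simple stand-in (same-class interpolation) rather than the paper's multiple kernel learning generator; the dataset and parameters are placeholders.

```python
# Enlarge the training set with same-class interpolates, then train an RBF-kernel SVM.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
synth_X, synth_y = [], []
for c in np.unique(y_tr):
    Xc = X_tr[y_tr == c]
    for _ in range(len(Xc)):                                 # roughly double each class
        a, b = Xc[rng.integers(len(Xc), size=2)]
        synth_X.append(a + rng.uniform() * (b - a))          # point on the segment between two samples
        synth_y.append(c)

X_aug = np.vstack([X_tr, synth_X])
y_aug = np.concatenate([y_tr, synth_y])
print(SVC(kernel="rbf", gamma="scale").fit(X_aug, y_aug).score(X_te, y_te))
```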


Author(s): Eddie B. Prestridge

The modern workstation, with its inherent high speed and massive memory, allows the use of complicated image analysis and processing algorithms and artificial intelligence programs not available to current PCs and mini-computers. There is no single feature that differentiates the UNIX-based workstation from a PC. Rather, it is its fundamental design that gives the workstation superior performance. This superiority is in no small part the result of the UNIX operating system. UNIX is a word that sometimes strikes fear into the hearts of mortal men! There is no reason to fear, however. Users of UNIX-based systems have no more interaction with the operating system than, say, users of WordPerfect running on an MS-DOS or MAC OS machine. UNIX is the only operating system that runs on a wide range of systems from laptops to mainframes. It is unique in at least five areas: 1) massive data set handling, 2) multitasking, 3) multi-user operation, 4) security, and 5) networking. Let me give you what I think are usable descriptions of these five areas.


Author(s): Nataliya Stoyanets, Mathias Onuh Aboyi
The article establishes that the successful implementation of an innovative project and the introduction of a new product into production require the use of advanced technologies and modern software, which are an integral part of successful innovation when the life cycle of innovations is taken into account. It is proposed to consider the overall potential of the enterprise through its main components, namely: production and technological, scientific and technical, financial and economic, personnel, and actual innovation potential. The base for the introduction of technological innovations is LLC "ALLIANCE-PARTNER", which provides a wide range of support and consulting services, services in the employment market, tourism, insurance, translation and more. To form a model of innovative development of the enterprise, it is advisable to establish the following key aspects: the system of value creation through the model of cooperation with partners and suppliers; the creation of a value chain; the technological platform; the infrastructure; and the cost of supply and of activities for customers and for the enterprise as a whole. A system of factors influencing the formation of a model of strategic innovative development of the enterprise is offered. The cost of the complex of technological equipment, amounting to UAH 6,800.0 thousand, is economically justified. Given that the company plans to receive funds under the program of socio-economic development of Sumy region, an evaluation of the effectiveness of the innovation project (the purchase of technological equipment) shows that the payback period of the project is 3 years and 10 months. In terms of net present value (NPV), the project under study is profitable. The project profitability index (PI) exceeds 1.0, meeting the requirement for a positive decision on project implementation. The internal rate of return (IRR) of the project is also favourable at 22%, as it exceeds the discount rate.
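For readers unfamiliar with the appraisal metrics cited (NPV, PI, IRR), the hedged sketch below computes them for a project costing UAH 6,800 thousand; the yearly cash inflows and discount rate are hypothetical placeholders, not figures from the article.

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] is the (negative) outlay at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=1.0, tol=1e-6):
    """Internal rate of return found by bisection (NPV is decreasing in the rate here)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid, cashflows) > 0 else (lo, mid)
    return (lo + hi) / 2

cashflows = [-6800.0, 2000.0, 2200.0, 2400.0, 2600.0, 2800.0]  # thousand UAH; inflows are placeholders
rate = 0.10                                                    # placeholder discount rate
pv_inflows = sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows[1:], start=1))
print(f"NPV = {npv(rate, cashflows):.1f} thousand UAH")
print(f"PI  = {pv_inflows / -cashflows[0]:.2f}")               # PV of inflows / initial investment
print(f"IRR = {irr(cashflows):.1%}")
```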


Entropy
2020
Vol 23 (1), pp. 62
Author(s): Zhengwei Liu, Fukang Zhu

Thinning operators play an important role in the analysis of integer-valued autoregressive models, and the most widely used is binomial thinning. Inspired by the theory of extended Pascal triangles, a new thinning operator named extended binomial is introduced, which generalizes binomial thinning. Compared to the binomial thinning operator, the extended binomial thinning operator has two parameters and is more flexible in modeling. Based on the proposed operator, a new integer-valued autoregressive model is introduced, which can accurately and flexibly capture the dispersion features of count time series. Two-step conditional least squares (CLS) estimation is investigated for the innovation-free case, and conditional maximum likelihood estimation is also discussed. The asymptotic properties of the two-step CLS estimator are also obtained. Finally, three overdispersed or underdispersed real data sets are considered to illustrate the superior performance of the proposed model.
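For context, the classical operator that the paper generalizes (standard definitions; the two-parameter extended binomial operator itself is defined in the paper) is binomial thinning, together with the INAR(1) recursion it enters:

```latex
\alpha \circ X \;=\; \sum_{i=1}^{X} B_i, \qquad B_i \stackrel{\text{i.i.d.}}{\sim} \mathrm{Bernoulli}(\alpha),\; \alpha \in (0,1),
\qquad
X_t \;=\; \alpha \circ X_{t-1} + \varepsilon_t .
```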


2021
Vol 22 (1)
Author(s): Eleanor F. Miller, Andrea Manica

Abstract
Background: Today an unprecedented amount of genetic sequence data is stored in publicly available repositories. For decades now, mitochondrial DNA (mtDNA) has been the workhorse of genetic studies, and as a result there is a large volume of mtDNA data available in these repositories for a wide range of species. Indeed, whilst whole genome sequencing is an exciting prospect for the future, for most non-model organisms classical markers such as mtDNA remain widely used. By compiling existing data from multiple original studies, it is possible to build powerful new datasets capable of exploring many questions in ecology, evolution and conservation biology. One key question that these data can help inform is what happened in a species' demographic past. However, compiling data in this manner is not trivial; there are many complexities associated with data extraction, data quality and data handling.
Results: Here we present the mtDNAcombine package, a collection of tools developed to manage some of the major decisions associated with handling multi-study sequence data, with a particular focus on preparing sequence data for Bayesian skyline plot demographic reconstructions.
Conclusions: There is now more genetic information available than ever before, and large meta-data sets offer great opportunities to explore new and exciting avenues of research. However, compiling multi-study datasets remains a technically challenging prospect. The mtDNAcombine package provides a pipeline to streamline the process of downloading, curating, and analysing sequence data, guiding the process of compiling data sets from the online database GenBank.
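As a generic illustration of the first step such a pipeline automates (and not the mtDNAcombine R package's own API), the hedged sketch below pulls mtDNA records for a species from GenBank with Biopython; the query term and email are placeholders.

```python
# Download mtDNA sequences matching a placeholder GenBank query via NCBI Entrez.
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"   # required by NCBI; placeholder address
query = "Parus major[Organism] AND cytb[Gene] AND mitochondrion[Filter]"  # placeholder query

handle = Entrez.esearch(db="nucleotide", term=query, retmax=100)
ids = Entrez.read(handle)["IdList"]
handle.close()

handle = Entrez.efetch(db="nucleotide", id=ids, rettype="fasta", retmode="text")
records = list(SeqIO.parse(handle, "fasta"))
handle.close()

print(f"downloaded {len(records)} sequences")
```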

