multiple neural networks
Recently Published Documents

Total documents: 164 (last five years: 36)
H-index: 18 (last five years: 4)

2021 · Vol. 9 (4) · pp. 59
Author(s): Yadwinder Kaur, Selina Weiss, Changsong Zhou, Rico Fischer, Andrea Hildebrandt

Functional connectivity studies have demonstrated that creative thinking builds upon an interplay of multiple neural networks involving the cognitive control system. Theoretically, cognitive control has generally been discussed as the common basis underlying the positive relationship between creative thinking and intelligence. However, the literature still lacks a detailed investigation of the association patterns between cognitive control, the factors of creative thinking as measured by divergent thinking (DT) tasks, i.e., fluency and originality, and intelligence, both fluid and crystallized. In the present study, we explored these relationships at the behavioral and the neural level in N = 77 young adults. We focused on brain-signal complexity (BSC), parameterized by multi-scale entropy (MSE), measured during a verbal DT task and a cognitive control task. We demonstrated that MSE is a sensitive neural indicator of originality as well as of inhibition. We then explored the relationships between MSE and factor scores indicating DT and intelligence. In a series of across-scalp analyses, we showed that overall MSE measured during the DT task, as well as MSE measured in cognitive control states, is associated with fluency and originality at specific scalp locations, but not with fluid or crystallized intelligence. The present explorative study broadens our understanding of the relationship between creative thinking, intelligence, and cognitive control from the perspective of BSC and has the potential to inspire future BSC-related theories of creative thinking.
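The abstract parameterises brain-signal complexity by multi-scale entropy (MSE). For readers unfamiliar with the measure, the following is a minimal NumPy sketch of the standard MSE computation (coarse-graining followed by sample entropy at each time scale); it is not the authors' analysis code, and the defaults (m = 2, r = 0.2 × SD, 20 scales) are common conventions rather than values reported in the study.

import numpy as np

def sample_entropy(x, m, tol):
    """Sample entropy of a 1-D signal for embedding dimension m and absolute tolerance tol.

    Written for clarity, not efficiency: the pairwise distance matrix is O(n^2) in memory.
    """
    n = len(x)

    def count_matches(dim):
        # All overlapping template vectors of length `dim`.
        templates = np.array([x[i:i + dim] for i in range(n - dim + 1)])
        # Chebyshev distance between every pair of templates.
        dists = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # Count pairs within tolerance, excluding self-matches on the diagonal.
        return np.sum(dists <= tol) - len(templates)

    b = count_matches(m)      # matches of length m
    a = count_matches(m + 1)  # matches of length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(signal, max_scale=20, m=2, r=0.2):
    """MSE curve: sample entropy of coarse-grained copies of the signal at scales 1..max_scale."""
    signal = np.asarray(signal, dtype=float)
    tol = r * signal.std()  # tolerance fixed relative to the SD of the original signal
    curve = []
    for tau in range(1, max_scale + 1):
        # Coarse-grain: average consecutive, non-overlapping windows of length tau.
        n_win = len(signal) // tau
        coarse = signal[:n_win * tau].reshape(n_win, tau).mean(axis=1)
        curve.append(sample_entropy(coarse, m, tol))
    return np.array(curve)

Applied channel-wise to EEG epochs, the resulting MSE curves give one complexity value per scale and electrode, which is the kind of quantity the across-scalp analyses above relate to fluency, originality, and intelligence scores.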


2021 · Vol. 12
Author(s): Renee Miller, Eric Kerfoot, Charlène Mauger, Tevfik F. Ismail, Alistair A. Young, ...

Parameterised patient-specific models of the heart enable quantitative analysis of cardiac function as well as estimation of regional stress and intrinsic tissue stiffness. However, developing personalised models and running the subsequent simulations have often required lengthy manual setup, from image labelling through to generating the finite element model and assigning boundary conditions. Recently, rapid patient-specific finite element modelling has been made possible through the use of machine learning techniques. In this paper, a pipeline for generating patient-specific biventricular models, which utilises multiple neural networks for image labelling and valve landmark detection together with streamlined data integration, is applied to clinically acquired data from a diverse cohort of individuals, including hypertrophic and dilated cardiomyopathy patients and healthy volunteers. Valve motion from tracked landmarks, as well as cavity volumes measured from labelled images, is used to drive realistic motion and estimate passive tissue stiffness values. The neural networks are shown to accurately label cardiac regions and features across these diverse morphologies. Furthermore, differences between groups in global intrinsic parameters, such as tissue anisotropy and normalised active tension, illustrate underlying changes in tissue composition and/or structure as a result of pathology. This study shows the successful application of a generic pipeline for biventricular modelling, incorporating artificial intelligence solutions, within a diverse cohort.
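As an illustration of one step in the kind of pipeline described above (reading cavity volumes directly off the networks' label maps), here is a minimal sketch; it is not the authors' code, and the label value and voxel spacing in the usage comment are hypothetical.

import numpy as np

def cavity_volume_ml(label_map, voxel_spacing_mm, cavity_label):
    """Cavity volume in millilitres from a labelled image: voxel count times voxel volume."""
    voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
    n_voxels = int(np.count_nonzero(np.asarray(label_map) == cavity_label))
    return n_voxels * voxel_volume_mm3 / 1000.0  # mm^3 -> ml

# Hypothetical usage: LV blood pool labelled as 1 in a segmentation with
# 1.25 x 1.25 x 8.0 mm voxels (both values are illustrative, not from the paper).
# lv_volume = cavity_volume_ml(segmentation, (1.25, 1.25, 8.0), cavity_label=1)

Computed per frame across the cardiac cycle, such volume curves are the kind of measurement that, together with the tracked valve landmarks, drives the motion of the finite element model.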


2021
Author(s): Martin Mirbauer, Miroslav Krabec, Jaroslav Křivánek, Elena Šikudová

Classification of 3D objects – selecting the category to which each object belongs – is of great interest in the field of machine learning. Numerous researchers use deep neural networks to address this problem, altering the network architecture and the representation of the 3D shape used as input. To investigate the effectiveness of these approaches, we first conduct an extensive survey of existing methods and identify common ideas by which we categorize them into a taxonomy. We then evaluate 11 selected classification networks on three 3D object datasets, extending the evaluation to a larger dataset on which most of the selected approaches have not yet been tested. For this, we provide a framework for converting shapes from common 3D mesh formats into the formats native to each network, and for training and evaluating the different classification approaches on these data. Although we are generally unable to reach the accuracies reported in the original papers, we can compare the relative performance of the approaches, as well as their performance when the dataset is the only variable changed, to provide valuable insights into performance on different kinds of data. We make our code available to simplify running training experiments with multiple neural networks with different prerequisites.
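One of the format conversions such a framework has to perform, turning a triangle mesh into the fixed-size point cloud consumed by point-based classifiers such as PointNet, can be sketched as follows. This is a generic illustration, not the authors' implementation: the point count and unit-sphere normalisation are common conventions and not taken from the paper.

import numpy as np

def sample_point_cloud(vertices, faces, n_points=1024, rng=None):
    """Area-weighted random sampling of points on a triangle mesh.

    vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices.
    Returns an (n_points, 3) array, the fixed-size input expected by
    point-based classification networks.
    """
    rng = np.random.default_rng() if rng is None else rng
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))

    # Pick faces with probability proportional to their area.
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    face_idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())

    # Uniform barycentric coordinates inside each chosen triangle.
    u, v = rng.random(n_points), rng.random(n_points)
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    w = 1.0 - u - v
    points = (w[:, None] * v0[face_idx]
              + u[:, None] * v1[face_idx]
              + v[:, None] * v2[face_idx])

    # Centre and scale to the unit sphere, a common preprocessing convention.
    points -= points.mean(axis=0)
    points /= np.linalg.norm(points, axis=1).max()
    return points

Analogous converters (voxelisation for volumetric networks, multi-view rendering for image-based networks) would cover the other input representations discussed in the survey.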


