Understanding Dyslexia Through Personalized Large-Scale Computational Models

2019 ◽  
Vol 30 (3) ◽  
pp. 386-395 ◽  
Author(s):  
Conrad Perry ◽  
Marco Zorzi ◽  
Johannes C. Ziegler

Learning to read is foundational for literacy development, yet many children in primary school fail to become efficient readers despite normal intelligence and schooling. This condition, referred to as developmental dyslexia, has been hypothesized to occur because of deficits in vision, attention, auditory and temporal processes, and phonology and language. Here, we used a developmentally plausible computational model of reading acquisition to investigate how the core deficits of dyslexia determined individual learning outcomes for 622 children (388 with dyslexia). We found that individual learning trajectories could be simulated on the basis of three component skills related to orthography, phonology, and vocabulary. In contrast, single-deficit models captured the means but not the distribution of reading scores, and a model with noise added to all representations could not even capture the means. These results show that heterogeneity and individual differences in dyslexia profiles can be simulated only with a personalized computational model that allows for multiple deficits.
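
The modeling logic described above (a personalized, multi-deficit simulation contrasted with single-deficit and global-noise controls) can be illustrated with a toy sketch. The snippet below is not the authors' model; the skill weights, distributions, and noise level are invented purely for illustration.

```python
# Toy sketch (not the published model): personalize a simulation with three
# component-skill parameters per child versus a single shared group profile.
import numpy as np

rng = np.random.default_rng(0)
n_children = 622

# Hypothetical per-child component skills in [0, 1]: orthography, phonology, vocabulary.
skills = rng.beta(2, 2, size=(n_children, 3))

def simulate_reading_score(skills, weights=(0.4, 0.4, 0.2), noise_sd=0.05):
    """Very simplified stand-in for a learning simulation: a weighted
    combination of component skills plus trial-to-trial noise."""
    base = skills @ np.asarray(weights)
    return base + rng.normal(0.0, noise_sd, size=len(skills))

# Personalized (multi-deficit) simulation: each child keeps their own profile.
personalized = simulate_reading_score(skills)

# Single-deficit control: every child gets the group-average profile, so the
# mean can match while individual variability is lost.
single_deficit = simulate_reading_score(np.tile(skills.mean(axis=0), (n_children, 1)))

print("personalized:", personalized.mean(), personalized.std())
print("single profile:", single_deficit.mean(), single_deficit.std())
```

Under this kind of setup, the shared-profile control reproduces the group mean but compresses the spread of simulated scores, which is the qualitative contrast reported in the abstract.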

2020 ◽  
Vol 29 (3) ◽  
pp. 293-300
Author(s):  
Johannes C. Ziegler ◽  
Conrad Perry ◽  
Marco Zorzi

How do children learn to read? How do deficits in various components of the reading network affect learning outcomes? How does remediating one or several components change reading performance? In this article, we summarize what is known about learning to read and how this can be formalized in a developmentally plausible computational model of reading acquisition. The model is used to understand normal and impaired reading development (dyslexia). In particular, we show that it is possible to simulate individual learning trajectories and intervention outcomes on the basis of three component skills: orthography, phonology, and vocabulary. We therefore advocate a multifactorial computational approach to understanding reading that has practical implications for dyslexia and intervention.


2018 ◽  
Vol 23 (2) ◽  
pp. 238-274 ◽  
Author(s):  
Jeffrey B. Vancouver ◽  
Mo Wang ◽  
Xiaofei Li

Theories are the core of any science, but many imprecisely stated theories in organizational and management science are hampering progress in the field. Computational modeling of existing theories can help address the issue. Computational models are a type of formal theory, represented mathematically or in another formal logic, that can be simulated, allowing theorists to assess whether the theory explains the intended phenomena and makes testable predictions. As an example of the process, Locke’s integrated model of work motivation is translated into static and dynamic computational models. Simulations of these models are compared to the empirical data used to develop and test the theory. For the static model, the simulations were largely consistent with robust empirical findings. However, adding dynamics created several challenges to key precepts of the theory. Moreover, the effort revealed where empirical work is needed to further refine or refute the theory. Discussion focuses on the value of computational modeling as a method for formally testing, pruning, and extending extant theories in the field.
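
As a hedged illustration of the static-versus-dynamic distinction drawn above (not a translation of Locke's actual model, whose equations are not reproduced here), a static formulation predicts performance in one shot, whereas a dynamic formulation updates it over time through a discrepancy-reduction loop:

```python
# Illustrative sketch only: generic static vs. dynamic motivation models.
# All coefficients are placeholders, not estimates from the cited work.
import numpy as np

def static_prediction(goal_level, self_efficacy, b0=0.1, b1=0.5, b2=0.4):
    # Static model: performance as a one-shot linear function of inputs.
    return b0 + b1 * goal_level + b2 * self_efficacy

def dynamic_simulation(goal_level, gain=0.3, steps=20):
    # Dynamic model: performance accumulates as effort closes the
    # goal-performance discrepancy at each time step.
    performance = 0.0
    trajectory = []
    for _ in range(steps):
        discrepancy = goal_level - performance
        effort = gain * discrepancy
        performance += effort
        trajectory.append(performance)
    return np.array(trajectory)

print(static_prediction(goal_level=1.0, self_efficacy=0.8))
print(dynamic_simulation(goal_level=1.0)[-5:])
```

The dynamic version makes time-course predictions (e.g., diminishing effort as the discrepancy shrinks) that a static regression-style model cannot, which is the kind of added constraint the article examines.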


2018 ◽  
Author(s):  
Joseph DeWilde ◽  
Esha Rangnekar ◽  
Jeffrey Ting ◽  
Joseph Franek ◽  
Frank S. Bates ◽  
...  

A biannual chemistry demonstration-based show named “Energy and U” was created to extend the general outreach themes of STEM fields and a college education with a specific goal: to teach the First Law of Thermodynamics to elementary school students. Energy is a central concept in chemical education and most STEM disciplines, and it lies at the foundation of many of the greatest challenges faced by society today. The effectiveness of the program was analyzed using a clicker survey system. This study provides one of the first examples of incorporating real-time feedback into large-scale chemistry-based outreach events for elementary school students in order to quantify and better understand the broader impact and learning outcomes.
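
For reference, the First Law of Thermodynamics targeted by the show can be stated for a closed system as follows (sign convention: W is the work done by the system):

```latex
% Change in internal energy = heat added to the system - work done by the system.
\begin{equation}
  \Delta U = Q - W
\end{equation}
```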


2018 ◽  
Vol 16 (1) ◽  
pp. 67-76
Author(s):  
Disyacitta Neolia Firdana ◽  
Trimurtini Trimurtini

This research aimed to determine the feasibility and effectiveness of big book media for learning equivalent fractions among fourth-grade students. The research method was Research and Development (R&D), and the study was conducted in the fourth grade of SDN Karanganyar 02, Kota Semarang. Data sources included media validation, material validation, learning outcomes, and teacher and student responses to the developed media. The study used a pre-experimental design with a one-group pretest-posttest design. The big book consists of equivalent-fraction material, student learning activity sheets with rectangle and circle shape pictures, and questions about equivalent fractions, and it was developed based on student and teacher needs. The big book achieved a media validity score of 3.75 (very good criteria) and was scored 3 by material experts (good criteria). In the large-scale trial, the students' posttest results showed a learning outcome completeness of 82.14%. The N-gain calculation yielded 0.55, which falls in the "medium" category. The t-test result (9.6320 > 2.0484) means that the average posttest outcome was better than the average pretest outcome. Based on these data, this study produced big book media that are feasible and effective for learning equivalent fractions in the fourth grade of elementary school.
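
The reported N-gain of 0.55 is presumably the Hake normalized gain computed from class-average pretest and posttest percentages; a common form of that statistic, with 0.3 ≤ g < 0.7 conventionally classed as "medium", is:

```latex
% Normalized gain from class-average pretest/posttest percentage scores.
\begin{equation}
  \langle g \rangle = \frac{\%\,\text{posttest} - \%\,\text{pretest}}{100\% - \%\,\text{pretest}}
\end{equation}
```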


2020 ◽  
Vol 27 ◽  
Author(s):  
Zaheer Ullah Khan ◽  
Dechang Pi

Background: S-sulfenylation (S-sulphenylation, or sulfenic acid) of proteins is a special kind of post-translational modification that plays an important role in various physiological and pathological processes such as cytokine signaling, transcriptional regulation, and apoptosis. Given this significance, and to complement existing wet-lab methods, several computational models have been developed for predicting sulfenylation cysteine (SC) sites. However, the performance of these models has not been satisfactory due to inefficient feature schemes, severe class imbalance, and the lack of an intelligent learning engine. Objective: Our motivation in this study is to establish a strong and novel computational predictor for discriminating sulfenylation from non-sulfenylation sites. Methods: We report an innovative bioinformatics predictor, named DeepSSPred, in which the encoded features are obtained via an n-segmented hybrid feature scheme, and the synthetic minority oversampling technique (SMOTE) is employed to cope with the severe imbalance between SC-sites (minority class) and non-SC-sites (majority class). A state-of-the-art 2D convolutional neural network (2D-CNN) was then trained and validated with a rigorous 10-fold jackknife cross-validation procedure. Results: The proposed framework, with its strong discrete representation of the feature space, effective learning engine, and unbiased presentation of the underlying training data, yielded a model that outperforms all existing studies. The proposed approach is 6% higher in MCC than the previous best method, which did not provide sufficient details for a comparison on an independent dataset. Compared with the second-best method, the model obtained increases of 7.5% in accuracy, 1.22% in Sn, 12.91% in Sp, and 13.12% in MCC on the training data, and 12.13% in accuracy, 27.25% in Sn, 2.25% in Sp, and 30.37% in MCC on an independent dataset. These analyses show the superior performance of the proposed model over both the training and independent datasets in comparison with existing studies. Conclusion: In this research, we have developed a novel sequence-based automated predictor for SC-sites, called DeepSSPred. The empirical results on the training and independent validation datasets reveal the efficacy of the proposed model. The good performance of DeepSSPred is due to several factors, such as the novel discriminative feature encoding schemes, the SMOTE technique, and careful construction of the prediction model through a tuned 2D-CNN classifier. We believe that our work provides insight for further prediction of S-sulfenylation characteristics and functionalities, and we hope that the developed predictor will be significantly helpful for large-scale discrimination of unknown SC-sites in particular and for designing new pharmaceutical drugs in general.
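
A minimal sketch of the pipeline described in the Methods (SMOTE rebalancing followed by a small 2D-CNN) is shown below. The window size, feature encoding, class ratio, and network layers are placeholder assumptions, not the published DeepSSPred configuration.

```python
# Sketch: oversample the minority class with SMOTE, then train a small 2D-CNN.
import numpy as np
from imblearn.over_sampling import SMOTE
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(42)

# Hypothetical encoded peptide windows: 400 samples of 21 x 20 feature maps,
# with a roughly 1:4 minority/majority imbalance (SC-sites vs. non-SC-sites).
X = rng.normal(size=(400, 21 * 20))
y = np.concatenate([np.ones(80, dtype=int), np.zeros(320, dtype=int)])

# SMOTE oversamples the minority class in flattened feature space.
X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
X_res = X_res.reshape(-1, 21, 20, 1)  # back to 2D "images" for the CNN

model = keras.Sequential([
    layers.Input(shape=(21, 20, 1)),
    layers.Conv2D(16, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_res, y_res, epochs=3, batch_size=32, verbose=0)
```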


2021 ◽  
Vol 15 ◽  
Author(s):  
Lichao Zhang ◽  
Zihong Huang ◽  
Liang Kong

Background: RNA-binding proteins establish posttranscriptional gene regulation by coordinating the maturation, editing, transport, stability, and translation of cellular RNAs. Immunoprecipitation experiments can identify interactions between RNAs and proteins, but they are limited by the experimental environment and materials. Therefore, it is essential to construct computational models to identify these functional sites. Objective: Although some computational methods have been proposed to predict RNA binding sites, their accuracy could be further improved. Moreover, it is necessary to construct a dataset with more samples to design a reliable model. Here we present a computational model based on multiple information sources to identify RNA binding sites. Method: We construct an accurate computational model, named CSBPI_Site, based on extreme gradient boosting. A specifically designed 15-dimensional feature vector captures four types of information (chemical shift, chemical bond, chemical properties, and position information). Results: A satisfactory accuracy of 0.86 and an AUC of 0.89 were obtained by leave-one-out cross-validation. Meanwhile, the accuracies differed only slightly (ranging from 0.83 to 0.85) across three classifier algorithms, showing that the novel features are stable and suited to multiple classifiers. These results show that the proposed method is effective and robust for noncoding RNA binding site identification. Conclusion: Our method based on multiple information sources effectively represents binding-site information among ncRNAs. The satisfactory prediction results for the Diels-Alder ribozyme based on CSBPI_Site indicate that our model is valuable for identifying functional sites.
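
The classification step described in the Method section can be roughly sketched as follows; the 15-dimensional features here are random placeholders rather than the chemical-shift/bond/property/position encoding used by CSBPI_Site, and the hyperparameters are assumptions.

```python
# Sketch: extreme gradient boosting over 15-dimensional feature vectors,
# scored with leave-one-out cross-validation.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 15))      # placeholder 15-dimensional feature vectors
y = rng.integers(0, 2, size=120)    # binding-site vs. non-binding-site labels

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut(), scoring="accuracy")
print("Leave-one-out accuracy:", scores.mean())
```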


2021 ◽  
Vol 11 (4) ◽  
pp. 1817
Author(s):  
Zheng Li ◽  
Azure Wilson ◽  
Lea Sayce ◽  
Amit Avhad ◽  
Bernard Rousseau ◽  
...  

We have developed a novel surgical/computational model for the investigation of unilateral vocal fold paralysis (UVFP) which will be used to inform future in silico approaches to improve surgical outcomes in type I thyroplasty. Healthy phonation (HP) was achieved using cricothyroid suture approximation on both sides of the larynx to generate symmetrical vocal fold closure. Following high-speed videoendoscopy (HSV) capture, sutures on the right side of the larynx were removed, partially releasing tension unilaterally and generating asymmetric vocal fold closure characteristic of UVFP (sUVFP condition). HSV revealed symmetric vibration in HP, while in sUVFP the sutured side demonstrated a higher frequency (10–11%). For the computational model, ex vivo magnetic resonance imaging (MRI) scans were captured at three configurations: non-approximated (NA), HP, and sUVFP. A finite-element method (FEM) model was built, in which cartilage displacements from the MRI images were used to prescribe the adduction, and the vocal fold deformation was simulated before the eigenmode calculation. The results showed that the frequency comparison between the two sides was consistent with observations from HSV. This alignment between the surgical and computational models supports the future application of these methods for the investigation of treatment for UVFP.
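
The final step of the computational workflow above, an eigenmode calculation on the deformed mesh, reduces to a generalized eigenvalue problem on stiffness and mass matrices. The toy example below uses a generic spring-mass chain rather than the vocal-fold FEM geometry from the study, so all quantities are placeholders.

```python
# Toy eigenmode calculation: natural frequencies from stiffness and mass matrices.
import numpy as np
from scipy.linalg import eigh

n = 10                       # degrees of freedom
k, m = 100.0, 0.01           # spring stiffness and nodal mass (arbitrary units)

# Tridiagonal stiffness matrix of a fixed-fixed spring chain; diagonal mass matrix.
K = k * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
M = m * np.eye(n)

# Generalized eigenvalue problem K x = w^2 M x; eigenvalues are squared angular
# frequencies, so f = sqrt(lambda) / (2 * pi).
eigvals, eigvecs = eigh(K, M)
frequencies_hz = np.sqrt(eigvals) / (2 * np.pi)
print(frequencies_hz[:3])
```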


Cancers ◽  
2021 ◽  
Vol 13 (9) ◽  
pp. 2111
Author(s):  
Bo-Wei Zhao ◽  
Zhu-Hong You ◽  
Lun Hu ◽  
Zhen-Hao Guo ◽  
Lei Wang ◽  
...  

Identification of drug-target interactions (DTIs) is a significant step in the drug discovery or repositioning process. Compared with time-consuming and labor-intensive in vivo experimental methods, computational models can provide high-quality DTI candidates almost instantly. In this study, we propose a novel method called LGDTI to predict DTIs based on large-scale graph representation learning. LGDTI captures both the local and the global structural information of the graph: the first-order neighbor information of nodes is aggregated by a graph convolutional network (GCN), while the high-order neighbor information of nodes is learned by the graph embedding method DeepWalk. Finally, the two kinds of features are fed into a random forest classifier to train and predict potential DTIs. The results show that our method obtained an area under the receiver operating characteristic curve (AUROC) of 0.9455 and an area under the precision-recall curve (AUPR) of 0.9491 under 5-fold cross-validation. Moreover, we compare the presented method with existing state-of-the-art methods. These results imply that LGDTI can efficiently and robustly capture undiscovered DTIs. The proposed model is expected to bring new inspiration and provide novel perspectives to relevant researchers.
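
A simplified sketch of the fusion-and-classification step described above follows; the local (GCN-style) and global (DeepWalk-style) embeddings are random placeholders rather than representations learned from a real drug-target graph, and the dimensions are assumptions.

```python
# Sketch: concatenate local and global node-pair embeddings, then classify
# with a random forest under 5-fold cross-validation (AUROC and AUPR).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(7)
n_pairs, d_local, d_global = 1000, 64, 64

local_emb = rng.normal(size=(n_pairs, d_local))    # stand-in for GCN features
global_emb = rng.normal(size=(n_pairs, d_global))  # stand-in for DeepWalk features
X = np.hstack([local_emb, global_emb])             # fused drug-target pair representation
y = rng.integers(0, 2, size=n_pairs)               # interaction labels

clf = RandomForestClassifier(n_estimators=300, random_state=7)
scores = cross_validate(clf, X, y, cv=5,
                        scoring={"auroc": "roc_auc", "aupr": "average_precision"})
print(scores["test_auroc"].mean(), scores["test_aupr"].mean())
```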


2017 ◽  
Vol 17 (1) ◽  
pp. 51-70
Author(s):  
Laurence Marty ◽  
Patrice Venturini ◽  
Jonas Almqvist

Classroom actions rely, among other things, on teaching habits and traditions. Previous research has identified three different teaching traditions in science education: the academic tradition builds on the idea that the products and methods of science are themselves worth teaching; the applied tradition focuses on students’ ability to use scientific knowledge and skills in their everyday lives; and the moral tradition opens up a relationship between science and society, focusing on students’ decision making concerning socioscientific issues. The aim of this paper is to identify and discuss similarities and differences between the science curricula of Sweden, France and Western Switzerland in terms of these teaching traditions. The analysis considers the following dimensions: (1) the goals of science education as presented in the initial recommendations of the curricula; (2) the organization and division of the core contents; and (3) the learning outcomes expected from students in terms of concepts, skills and/or scientific literacy requirements. Although the three traditions are taken into account within the various initial recommendations, the place they occupy in the content to be taught differs in each case. In the Swedish curriculum, our analyses show that the three traditions are embedded in both the initial recommendations and the expected outcomes. In the Western-Swiss and French curricula, by contrast, the three traditions are embedded in the initial recommendations but only the academic tradition can be found in the expected outcomes. The Swedish curriculum therefore seems to be more consistent with regard to teaching traditions. This may have consequences for teaching and learning practices, which are discussed in the article. Moreover, our analyses enable us to put forward definitions of the teaching traditions.

