A Synthetic Agent for Mentoring Novice Programmers Within a Desktop Computer Environment

Author(s):  
Desmond Case ◽  
Bernadette Sharp ◽  
Peter J. King

2020 ◽  
Author(s):  
Jesus Lopez ◽  
Joseph M. Orr

Media multitasking (e.g., listening to podcasts while studying) has been linked to decreased executive functioning. However, the tasks used to establish this finding do not approximate a real-world volitional multitasking environment. A novel experimental framework was designed to mimic a desktop computer environment in which a "popup" associated with a secondary task would occasionally appear. Participants could select the popup and perform a difficult word-stem completion trial, or ignore the popup and continue the primary task, which consisted of math problems. We predicted that individuals who are more impulsive, who media multitask more frequently, or who prefer to multitask (quantified with self-report questionnaires) would be more distracted by the popups, choose to perform the secondary task more often, and be slower to return to the primary task than those who media multitask less. We found that the more individuals media multitask, the slower they are to return to the previous (primary) task set and the slower they are to complete the primary task overall, whether or not a popup was present, among other effects on task performance. Our findings suggest that, overall, more frequent media multitaskers show a marginal decrease in task performance, including an increased return cost, whereas those who prefer to multitask show the opposite pattern on some performance measures. Impulsivity was not found to influence any task performance measure. Further iterations of this paradigm are necessary to elucidate the relationship between media multitasking and task performance, if one exists.



Geophysics ◽  
1999 ◽  
Vol 64 (4) ◽  
pp. 1108-1115 ◽  
Author(s):  
Warren T. Wood

Estimates of the source wavelet and band‐limited earth reflectivity are obtained simultaneously from an optimization of deconvolution outputs, similar to minimum‐entropy deconvolution (MED). The only inputs required beyond the observed seismogram are wavelet length and an inversion parameter (cooling rate). The objective function to be minimized is a measure of the spikiness of the deconvolved seismogram. I assume that the wavelet whose deconvolution from the data results in the most spike‐like trace is the best wavelet estimate. Because this is a highly nonlinear problem, simulated annealing is used to solve it. The procedure yields excellent results on synthetic data and disparate field data sets, is robust in the presence of noise, and is fast enough to operate in a desktop computer environment.
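Although the paper's exact objective function and annealing schedule are not reproduced here, the general recipe (perturb a wavelet estimate, deconvolve it from the trace, score the result's spikiness, and accept or reject Metropolis-style under a falling temperature) can be sketched in a few lines. The Python below is illustrative only: the varimax-style spikiness norm, the stabilized frequency-domain deconvolution, and all parameter values are assumptions, not the author's implementation.

```python
import numpy as np

def varimax(x):
    """Spikiness measure: larger values mean a more spike-like trace."""
    return np.sum(x**4) / (np.sum(x**2) ** 2 + 1e-12)

def deconvolve(trace, wavelet, eps=1e-3):
    """Stabilized (Wiener-style) frequency-domain deconvolution of the wavelet."""
    n = len(trace)
    W = np.fft.rfft(wavelet, n)
    T = np.fft.rfft(trace, n)
    return np.fft.irfft(T * np.conj(W) / (np.abs(W) ** 2 + eps), n)

def anneal_wavelet(trace, wavelet_len, n_iter=5000, t0=1.0, cooling=0.999, seed=0):
    """Simulated annealing: perturb the wavelet estimate and keep changes that
    make the deconvolved trace spikier, occasionally accepting worse ones."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(wavelet_len)
    obj = -varimax(deconvolve(trace, w))  # minimize negative spikiness
    best_w, best_obj, temp = w.copy(), obj, t0
    for _ in range(n_iter):
        cand = w + 0.1 * rng.standard_normal(wavelet_len)
        cand_obj = -varimax(deconvolve(trace, cand))
        # Metropolis rule: always accept improvements, sometimes accept worse
        if cand_obj < obj or rng.random() < np.exp((obj - cand_obj) / temp):
            w, obj = cand, cand_obj
            if obj < best_obj:
                best_w, best_obj = w.copy(), obj
        temp *= cooling  # the cooling rate: the abstract's single inversion parameter
    return best_w, deconvolve(trace, best_w)
```

In this sketch the cooling rate plays the role of the single inversion parameter the abstract mentions: slower cooling explores more of the wavelet space at greater computational cost.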



2020 ◽  
Author(s):  
Lukas M. Simon ◽  
Yin-Ying Wang ◽  
Zhongming Zhao

Efficient integration of heterogeneous and increasingly large single cell RNA sequencing (scRNA-seq) data poses a major challenge for analysis and, in particular, for comprehensive atlasing efforts. Here, we developed a novel deep learning algorithm, called INSCT ("Insight"), that overcomes batch effects using batch-aware triplet neural networks. Using simulated and real data, we demonstrate that INSCT generates an embedding space which accurately integrates cells across experiments, platforms, and species. Our benchmark comparisons with current state-of-the-art scRNA-seq integration methods revealed that INSCT outperforms competing methods in scalability while achieving comparable accuracy. Moreover, using INSCT in semi-supervised mode enables users to classify unlabeled cells by projecting them into a reference collection of annotated cells. To demonstrate scalability, we applied INSCT to integrate more than 2.6 million transcriptomes from four independent studies of mouse brains in less than 1.5 hours using less than 25 gigabytes of memory. This feature empowers researchers to perform atlasing-scale data integration in a typical desktop computer environment. INSCT is freely available at https://github.com/lkmklsmn/insct.

Highlights:
- INSCT accurately integrates multiple scRNA-seq datasets
- INSCT accurately predicts cell types for an independent scRNA-seq dataset
- An efficient deep learning framework enables integration of millions of cells on a personal computer
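INSCT's actual architecture and batch-aware triplet mining are specified in the paper and repository; the sketch below shows only the generic triplet-loss machinery such a method rests on, written in PyTorch. The encoder shape, the training loop, and the triplet-selection heuristic described in the comments are assumptions for illustration, not INSCT's implementation.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps expression profiles (e.g., top PCs per cell) to a low-dim embedding."""
    def __init__(self, n_in, n_emb=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, 128), nn.ReLU(),
            nn.Linear(128, 32), nn.ReLU(),
            nn.Linear(32, n_emb),
        )

    def forward(self, x):
        return self.net(x)

def train_triplets(x, anchors, positives, negatives, n_epochs=50, lr=1e-3):
    """x: (cells, features) tensor; anchors/positives/negatives: index tensors
    defining triplets. In a batch-aware scheme (assumed heuristic), a positive
    would be a nearest neighbor of the anchor drawn from a *different*
    sequencing batch, and a negative a dissimilar cell from the *same* batch,
    so that pulling positives together removes batch structure."""
    model = Encoder(x.shape[1])
    loss_fn = nn.TripletMarginLoss(margin=1.0)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(n_epochs):
        emb = model(x)
        loss = loss_fn(emb[anchors], emb[positives], emb[negatives])
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```

The essential design choice is that the loss never sees batch labels directly; batch correction emerges from how the triplets are mined across batches.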



Author(s):  
Hristo Terziev

The Internet of Things (IoT) is a new world that connects the space of objects in the real world with virtual space in a computer environment. For IoT to become an effective service platform, end users need to trust the system. With the growing volume of information and communication technologies, the need to ensure information security and improve data protection is increasing. Steganographic methods are one potential solution. Steganography based on the least significant bit (LSB), which hides data in the lowest-order bit of each cover element, is a popular and widely used method in the spatial domain.
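As a concrete illustration of spatial-domain LSB embedding, here is a minimal NumPy sketch; the function names and the 32-bit length header are illustrative conventions, not taken from the article.

```python
import numpy as np

def lsb_embed(pixels, message):
    """Hide message bytes in the least significant bit of each pixel value.
    pixels: uint8 array (the cover image). A 32-bit big-endian length header
    is prepended so the extractor knows how many bits to read (a convention
    assumed here, not from the article)."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    header = np.unpackbits(np.array([len(bits)], dtype=">u4").view(np.uint8))
    payload = np.concatenate([header, bits])
    if len(payload) > pixels.size:
        raise ValueError("message too large for cover image")
    out = pixels.copy().ravel()
    # Clear each target pixel's lowest bit, then write the payload bit into it
    out[: len(payload)] = (out[: len(payload)] & 0xFE) | payload
    return out.reshape(pixels.shape)

def lsb_extract(pixels):
    """Recover the hidden bytes by reading pixel LSBs."""
    flat = pixels.ravel() & 1
    n_bits = int(np.packbits(flat[:32]).view(">u4")[0])
    return np.packbits(flat[32 : 32 + n_bits]).tobytes()
```

A quick round trip shows the idea:

```python
cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
stego = lsb_embed(cover, b"secret")
assert lsb_extract(stego) == b"secret"
```

Because only the lowest bit of each pixel changes, the stego image differs from the cover by at most one intensity level per pixel, which is what makes LSB embedding visually inconspicuous (and, conversely, fragile to recompression).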



2009 ◽  
Author(s):  
Robert J. Pleban ◽  
Jennifer S. Tucker ◽  
Vanessa Johnson ◽  
Katie Gunther ◽
Thomas R. Graves


Author(s):  
Robert Calatayud ◽  
Enrique Navarro-Modesto ◽  
Enrique A. Navarro-Camba ◽  
Nagula T. Sangary


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Joshua T. Vogelstein ◽  
Eric W. Bridgeford ◽  
Minh Tang ◽  
Da Zheng ◽  
Christopher Douville ◽  
...  

To solve key biomedical problems, experimentalists now routinely measure millions or billions of features (dimensions) per sample, with the hope that data science techniques will be able to build accurate data-driven inferences. Because sample sizes are typically orders of magnitude smaller than the dimensionality of these data, valid inferences require finding a low-dimensional representation that preserves the discriminating information (e.g., whether the individual suffers from a particular disease). There is a lack of interpretable supervised dimensionality reduction methods that scale to millions of dimensions with strong statistical theoretical guarantees. We introduce an approach to extending principal components analysis by incorporating class-conditional moment estimates into the low-dimensional projection. The simplest version, Linear Optimal Low-Rank Projection (LOL), incorporates the class-conditional means. We prove, and substantiate with both synthetic and real data benchmarks, that LOL and its generalizations lead to improved data representations for subsequent classification, while maintaining computational efficiency and scalability. Using multiple brain imaging datasets consisting of more than 150 million features, and several genomics datasets with more than 500,000 features, LOL outperforms other scalable linear dimensionality reduction techniques in terms of accuracy, while requiring only a few minutes on a standard desktop computer.
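The paper gives the precise definitions and theoretical guarantees; as a rough Python sketch of the core idea (stack the class-conditional mean differences with the top principal directions of the class-centered data, orthonormalize, and project), something like the following could serve. The details here are my simplifications, not the authors' reference code.

```python
import numpy as np

def lol_project(X, y, d):
    """LOL-style supervised projection to d dimensions (simplified sketch).
    Assumes d exceeds the number of class-mean difference directions (k - 1)."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    delta = (means[1:] - means[0]).T             # class mean differences, (p, k-1)
    Xc = X - means[np.searchsorted(classes, y)]  # center each sample by its class mean
    # Top right singular vectors of the class-centered data (pooled covariance)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = np.hstack([delta, Vt[: d - delta.shape[1]].T])
    Q, _ = np.linalg.qr(basis)                   # orthonormalize the combined basis
    return X @ Q[:, :d]
```

For a two-class problem this reduces to prepending the mean-difference direction to an ordinary PCA basis, which is what lets the projection retain class-discriminating information that unsupervised PCA can discard.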




