Structurally Layered Representation Learning: Towards Deep Learning Through Genetic Programming

Author(s):  
Lino Rodriguez-Coayahuitl ◽  
Alicia Morales-Reyes ◽  
Hugo Jair Escalante
Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4486
Author(s):  
Niall O’Mahony ◽  
Sean Campbell ◽  
Lenka Krpalkova ◽  
Anderson Carvalho ◽  
Joseph Walsh ◽  
...  

Fine-grained change detection in sensor data is very challenging for artificial intelligence, though it is critically important in practice. It is the process of identifying differences in the state of an object or phenomenon where the differences are class-specific and difficult to generalise. As a result, many recent technologies that leverage big data and deep learning struggle with this task. This review focuses on the state-of-the-art methods, applications, and challenges of representation learning for fine-grained change detection. Our research focuses on methods of harnessing the latent metric space of representation learning techniques as an interim output for hybrid human-machine intelligence. We review methods for transforming and projecting embedding space so that significant changes can be communicated more effectively and a more comprehensive interpretation of underlying relationships in sensor data is facilitated. We conduct this research as part of our work towards developing a method for aligning the axes of latent embedding space with meaningful real-world metrics, so that the reasoning behind the detection of change in relation to past observations may be revealed and adjusted. This is an important topic in many fields concerned with producing more meaningful and explainable outputs from deep learning, and with providing a means for knowledge injection and model calibration in order to maintain user confidence.


2020 ◽  
Author(s):  
Junwen Luo ◽  
Yi Cai ◽  
Jialin Wu ◽  
Hongmin Cai ◽  
Xiaofeng Yang ◽  
...  

Abstract In recent years, deep learning has been increasingly used to decipher the relationships among protein sequence, structure, and function. Thus far, deep learning of proteins has mostly utilized protein primary sequence information, while the vast amount of protein tertiary structural information remains unused. In this study, we devised a self-supervised representation learning framework to extract the fundamental features of unlabeled protein tertiary structures (PtsRep), and the embedded representations were transferred to two commonly recognized protein engineering tasks: protein stability and GFP fluorescence prediction. On both tasks, PtsRep significantly outperformed the two benchmark methods (UniRep and TAPE-BERT), which are based on protein primary sequences. Protein clustering analyses demonstrated that PtsRep can capture the structural signals in proteins. PtsRep reveals an avenue for general protein structural representation learning, and for exploring protein structural space for protein engineering and drug design.
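The transfer setup this abstract describes, frozen pretrained structural embeddings feeding a light downstream predictor, can be sketched as follows. This is a minimal numpy illustration, not the PtsRep architecture: the encoder, dimensions, ridge head, and synthetic "stability" labels are all invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_encoder(structures, W):
    """Stand-in for a pretrained structural encoder:
    mean-pool a fixed nonlinear map of per-residue coordinates."""
    return np.stack([np.tanh(s @ W).mean(axis=0) for s in structures])

def fit_ridge(X, y, lam=1e-2):
    """Closed-form ridge regression head trained on top of the frozen embeddings."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Toy "structures": variable-length arrays of 3-D residue coordinates.
W = rng.standard_normal((3, 8))                                   # frozen encoder weights
structures = [rng.standard_normal((int(rng.integers(20, 40)), 3)) for _ in range(50)]
X = frozen_encoder(structures, W)                                 # (50, 8) embeddings
y = X @ rng.standard_normal(8) + 0.01 * rng.standard_normal(50)   # synthetic labels

w_head = fit_ridge(X, y)                  # only the head is trained; the encoder stays frozen
print(np.corrcoef(X @ w_head, y)[0, 1])
```

The design point the abstract makes is that the encoder is pretrained without labels and reused across tasks; here only the small ridge head sees the task labels.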


Author(s):  
Hedi Ben-younes ◽  
Remi Cadene ◽  
Nicolas Thome ◽  
Matthieu Cord

Multimodal representation learning is attracting growing interest within the deep learning community. While bilinear models provide an interesting framework for finding subtle combinations of modalities, their number of parameters grows quadratically with the input dimensions, making their practical implementation within classical deep learning pipelines challenging. In this paper, we introduce BLOCK, a new multimodal fusion based on the block-superdiagonal tensor decomposition. It leverages the notion of block-term ranks, which generalizes both concepts of rank and mode ranks for tensors, already used for multimodal fusion. This makes it possible to define new ways of optimizing the trade-off between the expressiveness and complexity of the fusion model, and to represent very fine interactions between modalities while maintaining powerful mono-modal representations. We demonstrate the practical interest of our fusion model by using BLOCK for two challenging tasks: Visual Question Answering (VQA) and Visual Relationship Detection (VRD), for which we design end-to-end learnable architectures for representing relevant interactions between modalities. Through extensive experiments, we show that BLOCK compares favorably with state-of-the-art multimodal fusion models on both VQA and VRD tasks. Our code is available at https://github.com/Cadene/block.bootstrap.pytorch.
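The core idea, constraining the bilinear interaction tensor to a block-superdiagonal form so that the parameter count grows with R small cores rather than quadratically with the full input dimensions, can be sketched in a few lines of numpy. All dimensions and weights below are illustrative, not taken from the paper or its released code:

```python
import numpy as np

rng = np.random.default_rng(0)

def block_fusion(x, y, Wx, Wy, cores, Wo):
    """Fuse two modality vectors through a block-superdiagonal bilinear core.

    Each of the R small cores mixes only its own chunk of the projected inputs,
    so the interaction cost is R small bilinear maps instead of one full
    (quadratically sized) bilinear tensor over the raw inputs.
    """
    hx = Wx @ x                      # project modality 1, shape (R*d,)
    hy = Wy @ y                      # project modality 2, shape (R*d,)
    R, d = len(cores), cores[0].shape[0]
    chunks = []
    for r in range(R):
        xr = hx[r * d:(r + 1) * d]
        yr = hy[r * d:(r + 1) * d]
        # bilinear interaction inside block r: z_r[k] = sum_ij xr[i] * core[i,j,k] * yr[j]
        chunks.append(np.einsum('i,ijk,j->k', xr, cores[r], yr))
    return Wo @ np.concatenate(chunks)   # final linear map to the fused output

# Toy sizes: inputs 16 and 12, R=4 blocks of size d=3, per-block output e=5, fused output K=8.
I, J, R, d, e, K = 16, 12, 4, 3, 5, 8
Wx = rng.standard_normal((R * d, I))
Wy = rng.standard_normal((R * d, J))
cores = [rng.standard_normal((d, d, e)) for _ in range(R)]
Wo = rng.standard_normal((K, R * e))

z = block_fusion(rng.standard_normal(I), rng.standard_normal(J), Wx, Wy, cores, Wo)
print(z.shape)
```

In this sketch the full bilinear map would need an I x J x (R*e) tensor, while the block form uses R cores of size d x d x e plus the two projections, which is the expressiveness/complexity trade-off the abstract refers to.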


Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
Jun Jin Choong ◽  
Xin Liu ◽  
Tsuyoshi Murata

Discovering and modeling community structure remains a fundamentally challenging task. In domains such as biology, chemistry, and physics, researchers often rely on community detection algorithms to uncover community structures from complex systems, yet no unified definition of community structure exists. Furthermore, existing models tend to be oversimplified, neglecting richer information such as nodal features. Coupled with the surge of user-generated information on social networks, a demand for newer techniques beyond traditional approaches is inevitable. Deep learning techniques such as network representation learning have shown tremendous promise. More specifically, supervised and semisupervised learning tasks such as link prediction and node classification have achieved remarkable results. However, unsupervised learning tasks such as community detection remain widely unexplored. In this paper, a novel deep generative model for community detection is proposed. Extensive experiments show that the proposed model, empowered with Bayesian deep learning, can provide insights in terms of uncertainty and exploit nonlinearities, which results in better performance in comparison to state-of-the-art community detection methods. Additionally, unlike traditional methods, the proposed model is agnostic to the definition of community structure. Leveraging low-dimensional embeddings of both network topology and feature similarity, it automatically learns the best model configuration for describing similarities in a community.


2016 ◽  
Vol 2 (4) ◽  
pp. 265-278 ◽  
Author(s):  
Guoqiang Zhong ◽  
Li-Na Wang ◽  
Xiao Ling ◽  
Junyu Dong

Author(s):  
Shany Biton ◽  
Sheina Gendelman ◽  
Antônio H Ribeiro ◽  
Gabriela Miana ◽  
Carla Moreira ◽  
...  

Abstract Aims This study aims to assess whether information derived from the raw 12-lead electrocardiogram (ECG), combined with clinical information, is predictive of atrial fibrillation (AF) development. Methods We use a subset of the Telehealth Network of Minas Gerais (TNMG) database consisting of patients who had repeated 12-lead ECG measurements between 2010 and 2017, comprising 1,130,404 recordings from 415,389 unique patients. The median (interquartile range) age at recording was 58 (46-69) years, and 38% of the patients were male. Recordings were assigned to train-validation and test sets in an 80:20 split stratified by class, age, and gender. A random forest classifier was trained to predict, for a given recording, the risk of AF development within 5 years. We use features obtained from different modalities, namely demographics, clinical information, engineered features, and features from deep representation learning. Results The best model performance on the test set was obtained for the model combining features from all modalities, with an AUROC of 0.909, against an AUROC of 0.839 for the best single-modality model. Conclusion Our study has important clinical implications for AF management. It is the first study integrating feature engineering, deep learning, and EMR metadata to create a risk prediction tool for the management of patients at risk of AF. The best model, which includes features from all modalities, demonstrates that human knowledge in electrophysiology combined with deep learning outperforms any single-modality approach. The high performance obtained suggests that structural changes in the 12-lead ECG are associated with existing or impending AF.
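The fusion-and-evaluation pattern described here, concatenating per-modality feature blocks and scoring the result with AUROC, can be illustrated with a small numpy sketch. Everything below is a hypothetical stand-in: the feature names and dimensions are invented, the labels are synthetic, and a fixed linear scorer replaces the paper's random forest classifier.

```python
import numpy as np

rng = np.random.default_rng(1)

def auroc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) identity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Hypothetical per-recording feature blocks for the modalities named in the abstract.
n = 200
demographics = rng.standard_normal((n, 2))     # e.g. age, sex (invented)
engineered = rng.standard_normal((n, 5))       # e.g. hand-crafted ECG features (invented)
deep = rng.standard_normal((n, 16))            # embedding from deep representation learning
X = np.hstack([demographics, engineered, deep])  # fused feature matrix, shape (200, 23)

# Synthetic labels correlated with the first feature, and a stand-in linear scorer
# in place of the random forest.
labels = (X[:, 0] + 0.5 * rng.standard_normal(n) > 0).astype(int)
w = rng.standard_normal(X.shape[1])
w[0] = 5.0
print(auroc(X @ w, labels))
```

The point of the sketch is the evaluation protocol: any classifier that emits a continuous risk score per recording can be compared across modality subsets by AUROC, as the study does for its single-modality versus all-modality models.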


2017 ◽  
Author(s):  
YoungJu Jo ◽  
Sangjin Park ◽  
JaeHwang Jung ◽  
Jonghee Yoon ◽  
Hosung Joo ◽  
...  

Abstract Establishing early warning systems for anthrax attacks is crucial in biodefense. Here we present an optical method for rapid screening of Bacillus anthracis spores through the synergistic application of holographic microscopy and deep learning. A deep convolutional neural network is designed to classify holographic images of unlabeled living cells. After training, the network outperforms previous techniques in all accuracy measures, achieving single-spore sensitivity and sub-genus specificity. The unique ‘representation learning’ capability of deep learning enables direct training from raw images instead of manually extracted features. The method automatically recognizes key biological traits encoded in the images and exploits them as fingerprints. This remarkable learning ability makes the proposed method readily applicable to classifying various single cells in addition to B. anthracis, as demonstrated for the diagnosis of Listeria monocytogenes, without any modification. We believe that our strategy will make holographic microscopy more accessible to medical doctors and biomedical scientists for easy, rapid, and accurate diagnosis of pathogens, and facilitate exciting new applications.

