Table2Vec: automated universal representation learning of enterprise data DNA for benchmarkable and explainable enterprise data science

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Longbing Cao ◽  
Chengzhang Zhu

Abstract: Enterprise data typically involves multiple heterogeneous data sources and external data that respectively record business activities, transactions, customer demographics, status, behaviors, interactions and communications with the enterprise, and the consumption of and feedback on its products, services, production, marketing, operations, and management. They involve enterprise DNA associated with domain-oriented transactions and master data, informational and operational metadata, and relevant external data. A critical challenge in enterprise data science is to enable an effective 'whole-of-enterprise' data understanding and data-driven discovery and decision-making on all-round enterprise DNA. Accordingly, here we introduce a neural encoder, Table2Vec, for automated universal representation learning of entities such as customers from all-round enterprise DNA, with automated data characteristics analysis and data quality augmentation. The learned universal representations serve as representative and benchmarkable enterprise data genomes (similar to biological genomes and DNA in organisms) and can be used for enterprise-wide and domain-specific learning tasks. Table2Vec integrates automated universal representation learning on low-quality enterprise data with downstream learning tasks. Such automated universal enterprise representation and learning cannot be addressed by existing enterprise data warehouses (EDWs), business intelligence and corporate analytics systems, where 'enterprise big tables' are constructed with reporting and analytics conducted by specific analysts on respective domain subjects and goals. It addresses critical limitations and gaps in existing representation learning, enterprise analytics and cloud analytics, which are analytical subject-, task- and data-specific, creating analytical silos in an enterprise. We illustrate Table2Vec by characterizing all-round customer data DNA in an enterprise on complex heterogeneous multi-relational big tables to build universal customer vector representations. The learned universal representation of each customer is all-round, representative and benchmarkable to support both enterprise-wide and domain-specific learning goals and tasks in enterprise data science. Table2Vec significantly outperforms the existing shallow, boosting and deep learning methods typically used for enterprise analytics. We further discuss the research opportunities, directions and applications of automated universal enterprise representation and learning and of the learned enterprise data DNA for automated, all-purpose, whole-of-enterprise and ethical machine learning and data science.
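The abstract describes Table2Vec only at a high level. As a rough, hypothetical Python/PyTorch sketch of the general idea of pooling an entity's rows from heterogeneous tables into a single benchmarkable vector, the snippet below embeds categorical and numeric fields per row and mean-pools across rows; the class name, field layout and dimensionality are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn

# Hypothetical sketch only: pool field embeddings from an entity's rows across
# relational tables into one "customer DNA" vector. Names and sizes are assumptions.
class CustomerEncoder(nn.Module):
    def __init__(self, cat_cardinalities, n_numeric, dim=64):
        super().__init__()
        # one embedding table per categorical field
        self.cat_embs = nn.ModuleList(nn.Embedding(c, dim) for c in cat_cardinalities)
        self.num_proj = nn.Linear(n_numeric, dim)   # project numeric fields
        self.out = nn.Linear(dim, dim)              # shared output head

    def forward(self, cat_fields, num_fields):
        # cat_fields: (rows, n_cat) integer ids; num_fields: (rows, n_numeric) floats
        cat = torch.stack([emb(cat_fields[:, i])
                           for i, emb in enumerate(self.cat_embs)], dim=1).mean(dim=1)
        row_vecs = cat + self.num_proj(num_fields)  # one vector per table row
        return self.out(row_vecs.mean(dim=0))       # pool rows into one entity vector

# Toy usage: 5 rows describing one customer, 2 categorical + 3 numeric fields.
enc = CustomerEncoder(cat_cardinalities=[10, 4], n_numeric=3)
customer_vec = enc(torch.randint(0, 4, (5, 2)), torch.randn(5, 3))  # 64-d representation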

2020 ◽  
Vol 34 (05) ◽  
pp. 9217-9224
Author(s):  
Tianyi Wang ◽  
Yating Zhang ◽  
Xiaozhong Liu ◽  
Changlong Sun ◽  
Qiong Zhang

Multi-role dialogue understanding comprises a wide range of diverse tasks such as question answering, act classification, dialogue summarization, etc. While dialogue corpora are abundantly available, labeled data for specific learning tasks can be highly scarce and expensive. In this work, we investigate dialogue context representation learning with various types of unsupervised pretraining tasks, where the training objectives arise naturally from the nature of the utterances and the structure of the multi-role conversation. Meanwhile, in order to locate essential information for dialogue summarization/extraction, the pretraining process enables external knowledge integration. The proposed fine-tuned pretraining mechanism is comprehensively evaluated on three different dialogue datasets along with a number of downstream dialogue-mining tasks. Results show that the proposed pretraining mechanism significantly improves all the downstream tasks, regardless of the encoder used.
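The abstract does not enumerate the individual pretraining tasks, so the snippet below is only a generic, assumed illustration of a self-supervised objective whose labels come for free from conversation structure (here, predicting which role produced an utterance); it is not the paper's mechanism, and all sizes and names are made up.

import torch
import torch.nn as nn

vocab_size, dim, n_roles = 1000, 64, 2
embed = nn.Embedding(vocab_size, dim)           # toy utterance encoder
role_head = nn.Linear(dim, n_roles)             # predicts who spoke
opt = torch.optim.Adam(list(embed.parameters()) + list(role_head.parameters()), lr=1e-3)

# Toy batch: 8 utterances of 12 token ids each; role labels are given "for free"
# by the dialogue structure, so no manual annotation is needed.
utterances = torch.randint(0, vocab_size, (8, 12))
roles = torch.randint(0, n_roles, (8,))

utterance_vecs = embed(utterances).mean(dim=1)  # mean-pooled token embeddings
loss = nn.functional.cross_entropy(role_head(utterance_vecs), roles)
loss.backward()
opt.step()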


Author(s):  
Kristen M. Getchell ◽  
Dessislava A. Pachamanova

Drawing on the scholarship of writing and learning, this article motivates the use of writing assignments in analytics courses and develops a framework for instructional design that advances both writing skills and discipline-specific learning. We translate a best practices set of foundational writing concepts into a matrix of design levers for analytics instructors and propose an instructional design process that balances discipline-specific learning goals with foundational writing concepts through specific writing activities. We summarize our experience applying the framework to a particular data science course and present some early evidence for favorable outcomes. The positive effect we observe extends beyond learning course concepts and includes increased student engagement and contributions to group work.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mario Zanfardino ◽  
Rossana Castaldo ◽  
Katia Pane ◽  
Ornella Affinito ◽  
Marco Aiello ◽  
...  

Abstract: Analysis of large-scale omics data along with biomedical images has been gaining huge interest for predicting phenotypic conditions towards personalized medicine. Multiple layers of investigation, such as genomics, transcriptomics and proteomics, have led to high dimensionality and heterogeneity of data. Multi-omics data integration can make a meaningful contribution to early diagnosis and to accurate estimates of prognosis and treatment in cancer. Some multi-layer data structures have been developed to integrate multi-omics biological information, but none of these has been developed and evaluated to include radiomic data. We propose to use MultiAssayExperiment (MAE) as an integrated data structure to combine multi-omics data and facilitate the exploration of heterogeneous data. We improved the usability of MAE by developing a Multi-omics Statistical Approaches (MuSA) tool that uses a Shiny graphical user interface and simplifies the management and analysis of radiogenomic datasets. The capabilities of MuSA were demonstrated using public breast cancer datasets from the TCGA-TCIA databases. The MuSA architecture is modular and can be divided into Pre-processing and Downstream analysis sections. The pre-processing section allows data filtering and normalization. The downstream analysis section contains data science modules such as correlation, clustering (e.g., heatmaps) and feature selection methods. The results are shown dynamically in MuSA. The MuSA tool provides an easy-to-use way to create, manage and analyze radiogenomic data. The application is specifically designed to guide non-programmer researchers through the different computational steps. Integration analysis is implemented in a modular structure, making MuSA easily extensible open-source software.
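MuSA itself is an R/Shiny application built on MultiAssayExperiment, so the short Python sketch below is only a generic analogue of the normalization and cross-assay correlation steps described above; all feature and sample names are made up for illustration.

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
patients = [f"P{i}" for i in range(20)]
expr = pd.DataFrame(rng.normal(size=(20, 5)), index=patients,
                    columns=[f"gene_{g}" for g in range(5)])
radiomics = pd.DataFrame(rng.normal(size=(20, 3)), index=patients,
                         columns=["shape", "texture", "intensity"])

# Pre-processing: per-feature z-score normalization of each assay.
expr_z = (expr - expr.mean()) / expr.std(ddof=0)
rad_z = (radiomics - radiomics.mean()) / radiomics.std(ddof=0)

# Downstream analysis: cross-assay Pearson correlations (genes x radiomic features),
# the kind of matrix a correlation/heatmap module would visualize.
corr = pd.DataFrame(
    np.corrcoef(expr_z.T, rad_z.T)[:expr_z.shape[1], expr_z.shape[1]:],
    index=expr_z.columns, columns=rad_z.columns)
print(corr.round(2))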


Author(s):  
Sebastijan Dumancic ◽  
Hendrik Blockeel

The goal of unsupervised representation learning is to extract a new representation of data, such that solving many different tasks becomes easier. Existing methods typically focus on vectorized data and offer little support for relational data, which additionally describes relationships among instances. In this work we introduce an approach for relational unsupervised representation learning. Viewing a relational dataset as a hypergraph, new features are obtained by clustering vertices and hyperedges. To find a representation suited for many relational learning tasks, a wide range of similarities between relational objects is considered, e.g. feature and structural similarities. We experimentally evaluate the proposed approach and show that models learned on such latent representations perform better, have lower complexity, and outperform the existing approaches on classification tasks.
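As a toy, assumed illustration of the general idea (not the authors' exact clustering procedure or similarity measures): view entities and relations as the vertices and hyperedges of a hypergraph, cluster both sides of the incidence matrix, and use the cluster memberships as latent features for downstream learners.

import numpy as np
from sklearn.cluster import KMeans

# Toy hypergraph: 6 entities (vertices) and 4 relations (hyperedges);
# incidence[i, j] = 1 if vertex i participates in hyperedge j.
incidence = np.array([[1, 0, 1, 0],
                      [1, 0, 0, 0],
                      [1, 1, 0, 0],
                      [0, 1, 0, 1],
                      [0, 1, 1, 1],
                      [0, 0, 1, 1]])

k = 2
vertex_clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(incidence)
edge_clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(incidence.T)

# One-hot cluster memberships become the new latent representation of each entity.
latent_features = np.eye(k)[vertex_clusters]
print(vertex_clusters, edge_clusters, latent_features)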


2021 ◽  
Vol 8 ◽  
Author(s):  
Maria-Theresia Verwega ◽  
Carola Trahms ◽  
Avan N. Antia ◽  
Thorsten Dickhaus ◽  
Enno Prigge ◽  
...  

Earth System Sciences have been generating increasingly large amounts of heterogeneous data in recent years. We identify the need to combine Earth System Sciences with Data Sciences, and give our perspective on how this could be accomplished within the sub-field of Marine Sciences. Marine data hold abundant information and insights that Data Science techniques can reveal. There is high demand and potential to combine skills and knowledge from Marine and Data Sciences to best take advantage of the vast amount of marine data. This can be accomplished by establishing Marine Data Science as a new research discipline. Marine Data Science is an interface science that applies Data Science tools to extract information, knowledge, and insights from the exponentially increasing body of marine data. Marine Data Scientists need to be trained Data Scientists with a broad basic understanding of Marine Sciences and expertise in knowledge transfer. Marine Data Science doctoral researchers need targeted training for these specific skills, a crucial component of which is co-supervision from both parental sciences. They may also face challenges of scientific recognition and the lack of an established academic career path. In this paper, we, Marine and Data Scientists at different stages of our academic careers, present perspectives to define Marine Data Science as a distinct discipline. We draw on the experiences of a Doctoral Research School, MarDATA, dedicated to training a cohort of early career Marine Data Scientists. We characterize the methods of Marine Data Science as a toolbox that includes skills from both parental sciences. All of these aim to analyze and interpret marine data, which form the foundation of Marine Data Science.


2007 ◽  
Vol 1 (2) ◽  
pp. 105
Author(s):  
Endang Fauziati

This article explores the concept of individualized learning as applied to the teaching-learning process, which can enhance learners' autonomy, and provides brief practical guidance on how to put this concept into classroom practice. There are at least five underlying assumptions of learning based on this concept, namely: different learning styles, a variety of sources, teacher as facilitator, integrated learning tasks, and different learning goals. It can be concluded that classroom practices designed on the basis of these concepts, such as grouping, projects or tasks, and discussion, can improve learners' autonomy.


2020 ◽  
Vol 14 (4) ◽  
pp. 534-546
Author(s):  
Tianyu Li ◽  
Matthew Butrovich ◽  
Amadou Ngom ◽  
Wan Shen Lim ◽  
Wes McKinney ◽  
...  

The proliferation of modern data processing tools has given rise to open-source columnar data formats. These formats help organizations avoid repeated conversion of data to a new format for each application. However, these formats are read-only, and organizations must use a heavy-weight transformation process to load data from on-line transactional processing (OLTP) systems. As a result, DBMSs often fail to take advantage of full network bandwidth when transferring data. We aim to reduce or even eliminate this overhead by developing a storage architecture for in-memory database management systems (DBMSs) that is aware of the eventual usage of its data and emits columnar storage blocks in a universal open-source format. We introduce relaxations to common analytical data formats to efficiently update records and rely on a lightweight transformation process to convert blocks to a read-optimized layout when they are cold. We also describe how to access data from third-party analytical tools with minimal serialization overhead. We implemented our storage engine based on the Apache Arrow format and integrated it into the NoisePage DBMS to evaluate our work. Our experiments show that our approach achieves comparable performance with dedicated OLTP DBMSs while enabling orders-of-magnitude faster data exports to external data science and machine learning tools than existing methods.
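As a minimal illustration of why a universal columnar format pays off at export time (this sketch uses the pyarrow package, not NoisePage's storage engine): a read-optimized block laid out as an Arrow table can be streamed to an analytics client and consumed with essentially no per-row conversion.

import io
import pyarrow as pa
import pyarrow.ipc as ipc

# Pretend this is a cold, read-optimized storage block in Arrow layout.
block = pa.table({
    "customer_id": pa.array([1, 2, 3], type=pa.int64()),
    "balance": pa.array([10.5, 99.0, 7.25], type=pa.float64()),
})

# Export via the Arrow IPC stream format (what a DBMS could push over the wire).
buf = io.BytesIO()
with ipc.new_stream(buf, block.schema) as writer:
    writer.write_table(block)

# An analytics client reads the stream back without a row-by-row transformation.
df = ipc.open_stream(buf.getvalue()).read_all().to_pandas()
print(df)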


2020 ◽  
Author(s):  
Jörn Lötsch ◽  
Alfred Ultsch

Abstract: Calculating the magnitude of treatment effects or of differences between two groups is a common task in quantitative science. Standard effect size measures based on differences, such as the commonly used Cohen's d, fail to capture treatment-related effects on the data if the effects are not reflected in the central tendency. "Impact" is a novel nonparametric measure of effect size obtained as the sum of two separate components: (i) the change in the central tendency of the group-specific data, normalized to the overall variability, and (ii) the difference in the probability density of the group-specific data. Results obtained on artificial data and empirical biomedical data showed that Impact outperforms Cohen's d thanks to this additional component. It is shown that, in a multivariate setting, while standard statistical analyses and Cohen's d are unable to identify effects that change the shape of the data distribution, Impact correctly captures them. The proposed effect size measure shares this ability to detect such effects with machine learning algorithms. It is numerically stable even for degenerate distributions consisting of singular values. Therefore, the proposed effect size measure is particularly well suited for data science and artificial intelligence-based knowledge discovery from big and heterogeneous data.
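The abstract only summarizes the two components, so the sketch below is a rough numerical analogue under explicit assumptions — central-tendency change taken as the normalized median difference, and distribution-shape change as half the integrated absolute difference between the groups' kernel density estimates — rather than the authors' exact definition of Impact.

import numpy as np
from scipy.stats import gaussian_kde

def impact_like(a, b, grid_size=512):
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled = np.concatenate([a, b])
    # Component (i): change in central tendency, normalized to overall variability.
    ct_change = abs(np.median(a) - np.median(b)) / pooled.std(ddof=1)
    # Component (ii): difference in probability density, estimated on a common grid.
    grid = np.linspace(pooled.min(), pooled.max(), grid_size)
    dens_a, dens_b = gaussian_kde(a)(grid), gaussian_kde(b)(grid)
    shape_change = 0.5 * np.sum(np.abs(dens_a - dens_b)) * (grid[1] - grid[0])
    return ct_change + shape_change

rng = np.random.default_rng(0)
# Same center, very different spread: Cohen's d is near zero, this sketch is not.
print(impact_like(rng.normal(0, 1, 300), rng.normal(0, 3, 300)))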


Author(s):  
Carla Quinci

This chapter provides an outline of the main issues concerning the conceptualisation and modelling of Translation Competence (TC) and proposes the adoption of a product-based definition for didactic purposes. This definition is based on an empirical longitudinal, product-oriented study of TC that aims to identify textual features and conventions that can be related to the translator's level of competence. The preliminary results from this longitudinal study presented in this chapter suggest a possible relation between specific textual and procedural patterns and the participants' translation experience. Such patterns could provide translator trainers and trainees with a set of pragmatic indications for the definition and achievement of specific learning goals and could potentially serve as predictive developmental hypotheses in translator training.

