Cognitive Compliance: Assessing Regulatory Risk in Financial Advice Documents

2020 ◽  
Vol 34 (09) ◽  
pp. 13636-13637
Author(s):  
Wanita Sherchan ◽  
Sue Ann Chen ◽  
Simon Harris ◽  
Nebula Alam ◽  
Khoi-Nguyen Tran ◽  
...  

This paper describes Cognitive Compliance, a solution that automates the complex manual process of assessing the regulatory compliance of personal financial advice. The solution uses natural language processing (NLP), machine learning and deep learning to characterise the regulatory risk status of personal financial advice documents, assigning a traffic-light rating to each risk factor. This enables comprehensive review coverage and rapid identification of documents at high risk of non-compliance with government regulations.
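The abstract does not publish the solution's scoring scheme, but the traffic-light step it describes can be sketched as a thin layer over per-factor model outputs. A minimal sketch, assuming the upstream NLP/ML models emit a risk probability per factor; the factor names and thresholds below are illustrative, not from the paper:

```python
# Hypothetical risk factors; the paper does not enumerate its actual factors.
RISK_FACTORS = ["product_suitability", "fee_disclosure", "conflict_of_interest"]

def traffic_light(score, amber=0.4, red=0.7):
    """Map a model's risk probability in [0, 1] to a traffic-light label.
    The 0.4/0.7 cut-offs are assumptions for illustration."""
    if score >= red:
        return "red"
    if score >= amber:
        return "amber"
    return "green"

def rate_document(factor_scores):
    """Rate one advice document across all risk factors, and flag it for
    priority review if any factor lands on red."""
    ratings = {f: traffic_light(factor_scores[f]) for f in RISK_FACTORS}
    ratings["priority_review"] = any(v == "red" for v in ratings.values())
    return ratings

doc = {"product_suitability": 0.82, "fee_disclosure": 0.35,
       "conflict_of_interest": 0.55}
print(rate_document(doc))
```

With these assumed thresholds, the sample document is flagged for priority review because one factor is red, which matches the abstract's goal of rapidly surfacing documents at high risk of non-compliance.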

Author(s):  
Sumit Kaur

Abstract- Deep learning is an emerging research area in the machine learning and pattern recognition field, presented with the goal of drawing machine learning nearer to one of its original objectives: artificial intelligence. It tries to mimic the human brain, which can process and learn from complex input data and solve many kinds of complicated tasks well. Deep learning (DL) is based on a set of supervised and unsupervised algorithms that attempt to model higher-level abstractions in data, learning hierarchical representations for classification. In recent years it has attracted much attention due to its state-of-the-art performance in diverse areas such as object perception, speech recognition, computer vision, collaborative filtering and natural language processing. This paper presents a survey of deep learning techniques for remote sensing image classification.


2020 ◽  
Vol 114 ◽  
pp. 242-245
Author(s):  
Jootaek Lee

The meaning of the term Artificial Intelligence (AI) has changed since it was first coined by John McCarthy in 1956. AI, whose conceptual roots are sometimes traced to Kurt Gödel's unprovable computational statements of 1931, is now often equated with deep learning or machine learning. AI is defined as a computer machine with the ability to make predictions about the future and solve complex tasks using algorithms. AI algorithms are enhanced and become effective with big data capturing the present and the past, while still necessarily reflecting human biases in their models and equations. AI is also capable of making choices like humans, mirroring human reasoning. AI can help robots efficiently repeat the same labor-intensive procedures in factories, and can analyze historic and present data efficiently through deep learning, natural language processing, and anomaly detection. Thus, AI covers a spectrum: augmented intelligence for prediction, autonomous intelligence for decision making, automated intelligence for labor robots, and assisted intelligence for data analysis.


Author(s):  
Janjanam Prabhudas ◽  
C. H. Pradeep Reddy

The enormous increase of information, along with the computational abilities of machines, has created innovative applications in natural language processing that invoke machine learning models. This chapter projects the trends of natural language processing, employing machine learning and its models in the context of text summarization. The chapter is organized to help the researcher understand the technical perspectives on feature representation and the models to consider before applying them to language-oriented tasks. It then reviews the primary models of deep learning, their applications, and their performance in the context of language processing. The primary focus of this chapter is to illustrate the technical research findings and gaps of deep-learning-based text summarization, along with state-of-the-art deep learning models for the task.


2019 ◽  
Vol 3 (Supplement_1) ◽  
pp. S480-S480
Author(s):  
Robert Lucero ◽  
Ragnhildur Bjarnadottir

Abstract Two hundred and fifty thousand older adults die annually in United States hospitals because of iatrogenic conditions (ICs). Clinicians, aging experts, patient advocates and federal policy makers agree that there is a need to enhance the safety of hospitalized older adults through improved identification and prevention of ICs. To this end, we are building a research program with the goal of enhancing the safety of hospitalized older adults by reducing ICs through an effective learning health system. Leveraging unique electronic data, healthcare system resources, and human resources at the University of Florida, we are applying a state-of-the-art practice-based data science approach to identify risk factors of ICs (e.g., falls) from structured (i.e., nursing, clinical, administrative) and unstructured or text (i.e., registered nurses’ progress notes) data. Our interdisciplinary academic-clinical partnership includes scientific and clinical experts in patient safety, care quality, health outcomes, nursing and health informatics, natural language processing, data science, aging, standardized terminology, clinical decision support, statistics, machine learning, and hospital operations. Results to date have uncovered previously unknown fall risk factors within nursing (i.e., physical therapy initiation), clinical (i.e., number of fall risk increasing drugs, hemoglobin level), and administrative (i.e., Charlson Comorbidity Index, nurse skill mix, and registered nurse staffing ratio) structured data, as well as patient cognitive, environmental, workflow, and communication factors in text data. The application of data science methods (i.e., machine learning and text-mining) and findings from this research will be used to develop text-mining pipelines to support sustained data-driven interdisciplinary aging studies to reduce ICs.
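The text-mining pipeline over nurses' progress notes is not described in detail in the abstract. A minimal sketch of the simplest such component, a lexicon matcher over the factor categories the abstract names (cognitive, environmental, communication); the cue terms here are invented for illustration and are not the study's lexicon:

```python
import re

# Illustrative fall-risk cue lexicon, keyed by the factor categories named
# in the abstract. The actual terms and models used in the study differ.
FALL_RISK_CUES = {
    "cognitive": ["confused", "disoriented", "agitated"],
    "environmental": ["cluttered room", "bed alarm off", "wet floor"],
    "communication": ["did not call for help", "call light out of reach"],
}

def flag_note(note):
    """Return the cue categories (and matched terms) found in one note."""
    text = note.lower()
    hits = {}
    for category, terms in FALL_RISK_CUES.items():
        found = [t for t in terms if re.search(re.escape(t), text)]
        if found:
            hits[category] = found
    return hits

note = "Pt confused overnight; call light out of reach at 0300."
print(flag_note(note))
```

A production pipeline would layer negation handling, abbreviation expansion, and a learned classifier on top of matching like this, but the lexicon pass shows where text-derived risk factors enter the structured analysis.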


Author(s):  
Ninon Burgos ◽  
Simona Bottani ◽  
Johann Faouzi ◽  
Elina Thibeau-Sutre ◽  
Olivier Colliot

Abstract In order to reach precision medicine and improve patients’ quality of life, machine learning is increasingly used in medicine. Brain disorders are often complex and heterogeneous, and several modalities such as demographic, clinical, imaging, genetic and environmental data have been studied to improve their understanding. Deep learning, a subfield of machine learning, provides complex algorithms that can learn from such varied data. It has become the state of the art in numerous fields, including computer vision and natural language processing, and is increasingly applied in medicine. In this article, we review the use of deep learning for brain disorders. More specifically, we identify the main applications, the disorders concerned, and the types of architectures and data used. Finally, we provide guidelines to bridge the gap between research studies and clinical routine.


2019 ◽  
Vol 2019 ◽  
pp. 1-12 ◽  
Author(s):  
Yan Wang ◽  
Hao Zhang ◽  
Zhanliang Sang ◽  
Lingwei Xu ◽  
Conghui Cao ◽  
...  

Automatic modulation recognition has successfully used various machine learning methods and achieved certain results. As a subarea of machine learning, deep learning has made great progress in recent years, with remarkable results in image and language processing. Deep learning requires a large amount of data; since communication systems generate large amounts of data, they have an inherent advantage in applying it. However, the application of deep learning in the communication field is not yet fully developed, especially in underwater acoustic communication. In this paper, we discuss the modulation recognition process, an important part of the communication process, using the deep learning method. Unlike common machine learning methods, which require manual feature extraction, the deep learning method does not, and it achieves better results than common machine learning.


2021 ◽  
Author(s):  
KOUSHIK DEB

Character computing consists not only of personality trait recognition, but also of the correlation among these traits. A great deal of research has been conducted in this area. Various factors such as demographics, sentiment, gender, and LIWC features have been taken into account in order to understand human personality. In this paper, we concentrate on the factors that can be obtained from available data using natural language processing (NLP). It has been observed that the most successful personality trait prediction models are highly dependent on NLP techniques. Researchers across the globe have used different kinds of machine learning and deep learning techniques to automate this process, and different combinations of factors lead the research in different directions. We present a comparative study of those experiments and try to derive a direction for future development.


2020 ◽  
Author(s):  
Monalisha Ghosh ◽  
Goutam Sanyal

Abstract Sentiment analysis has recently been considered the most active research field in the natural language processing (NLP) domain. Deep learning, a subset of the large family of machine learning, is a growing trend due to its automatic learning capability, with impressive results across different NLP tasks. Hence, a fusion-based machine learning framework is attempted here, merging a traditional machine learning method with deep learning techniques to tackle the challenge of sentiment prediction on a massive amount of unstructured review data. The proposed architecture uses a convolutional neural network (CNN) trained with backpropagation to extract embedded feature vectors from its top hidden layer. These vectors are then augmented with an optimized feature set generated by the binary particle swarm optimization (BPSO) method. Finally, a traditional SVM classifier is trained with this extended feature set to determine the optimal hyperplane separating the two classes of the review datasets. The evaluation of this research work has been carried out on two benchmark movie review datasets, IMDB and SST2. Experimental results, with comparative studies based on accuracy and F-score, are reported to highlight the benefits of the developed frameworks.
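The BPSO step in this pipeline searches over binary masks selecting a feature subset. A minimal, self-contained sketch of binary particle swarm optimization over a toy fitness function; the paper's actual fitness (classifier accuracy on the candidate subset), swarm parameters, and CNN-derived features are not reproduced here, so the per-feature scores below are invented:

```python
import math, random

random.seed(0)

# Toy per-feature usefulness scores (assumption); in the paper the fitness
# would be a classifier's score on the candidate feature subset.
FEATURE_VALUE = [0.9, 0.1, 0.8, 0.05, 0.7, 0.02]
COST_PER_FEATURE = 0.15  # penalty discouraging large subsets

def fitness(bits):
    gain = sum(v for b, v in zip(bits, FEATURE_VALUE) if b)
    return gain - COST_PER_FEATURE * sum(bits)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bpso(n_particles=10, n_iter=60, w=0.7, c1=1.5, c2=1.5):
    n = len(FEATURE_VALUE)
    pos = [[random.randint(0, 1) for _ in range(n)] for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(n):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Binary PSO: the velocity sets the probability of the bit
                # being 1 (Kennedy & Eberhart's sigmoid transfer function).
                pos[i][d] = 1 if random.random() < sigmoid(vel[i][d]) else 0
            f = fitness(pos[i])
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f > gbest_fit:
                    gbest, gbest_fit = pos[i][:], f
    return gbest, gbest_fit

best, best_fit = bpso()
print(best, round(best_fit, 3))
```

On this toy fitness, the swarm settles on the three high-value features, mirroring how BPSO prunes the lexical feature set before it is fused with the CNN vectors for the SVM.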


Information ◽  
2021 ◽  
Vol 12 (9) ◽  
pp. 374
Author(s):  
Babacar Gaye ◽  
Dezheng Zhang ◽  
Aziguli Wulamu

With the extensive availability of social media platforms, Twitter has become a significant tool for the acquisition of people’s views, opinions, attitudes, and emotions towards certain entities. Within this frame of reference, sentiment analysis of tweets has become one of the most fascinating research areas in the field of natural language processing. A variety of techniques have been devised for sentiment analysis, but there is still room for improvement where the accuracy and efficacy of the system are concerned. This study proposes a novel approach that exploits the advantages of a lexical dictionary, machine learning, and deep learning classifiers. We classified the tweets based on the sentiments extracted by TextBlob, using a stacked ensemble of three long short-term memory (LSTM) networks as base classifiers and logistic regression (LR) as the meta-classifier. The proposed model proved to be effective and time-saving, since it does not require manual feature extraction: the LSTMs extract features without human intervention. We also compared our proposed approach with conventional machine learning models such as logistic regression, AdaBoost, and random forest, and included state-of-the-art deep learning models in the comparison. Experiments were conducted on the Sentiment140 dataset and evaluated in terms of accuracy, precision, recall, and F1 score. Empirical results show that our proposed approach achieved state-of-the-art results, with an accuracy score of 99%.
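The stacking step described here trains a logistic-regression meta-classifier on the base models' predicted probabilities. A minimal sketch of that mechanism with simple stand-in base classifiers; the trained LSTMs, TextBlob labeling, and tweet features are not reproduced, so every model and data point below is illustrative:

```python
import math, random

random.seed(1)

# Stand-ins for the three trained LSTM base classifiers: each maps a scalar
# text feature x to a positive-class probability. Real base models would be
# LSTMs over token sequences (assumption for illustration).
def base_model(bias):
    return lambda x: 1.0 / (1.0 + math.exp(-(x + bias)))

BASES = [base_model(-0.5), base_model(0.0), base_model(0.5)]

def stack_features(x):
    """Meta-features: the base models' predicted probabilities."""
    return [m(x) for m in BASES]

def train_meta(X, y, lr=0.5, epochs=500):
    """Logistic-regression meta-classifier, fit by stochastic gradient descent."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for feats, label in zip(X, y):
            z = b + sum(wi * f for wi, f in zip(w, feats))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - label  # gradient of the log loss w.r.t. z
            w = [wi - lr * err * f for wi, f in zip(w, feats)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = b + sum(wi * f for wi, f in zip(w, stack_features(x)))
    return 1 if z >= 0 else 0

# Toy labels: positive sentiment for x > 0, negative otherwise.
xs = [random.uniform(-3, 3) for _ in range(200)]
ys = [1 if x > 0 else 0 for x in xs]
w, b = train_meta([stack_features(x) for x in xs], ys)
print(predict(w, b, 2.0), predict(w, b, -2.0))
```

The design point the abstract relies on is visible here: the meta-classifier never sees raw inputs, only the base models' probability outputs, which is what lets heterogeneous base learners be combined cheaply.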


2021 ◽  
Author(s):  
Quincy A Hathaway ◽  
Naveena Yanamala ◽  
Matthew J Budoff ◽  
Partho P Sengupta ◽  
Irfan Zeb

Background: There is growing interest in utilizing machine learning techniques for routine atherosclerotic cardiovascular disease (ASCVD) risk prediction. We investigated whether novel deep learning survival models can augment ASCVD risk prediction over existing statistical and machine learning approaches. Methods: 6,814 participants from the Multi-Ethnic Study of Atherosclerosis (MESA) were followed over 16 years to assess incidence of all-cause mortality (mortality) or a composite of major adverse events (MAE). Features were evaluated within the categories of traditional risk factors, inflammatory biomarkers, and imaging markers. Data were split into an internal training/testing set (four centers) and an external validation set (two centers). Both machine learning (COXPH, RSF, and lSVM) and deep learning (nMTLR and DeepSurv) models were evaluated. Results: In comparison to the COXPH model, DeepSurv significantly improved ASCVD risk prediction for MAE (AUC: 0.82 vs. 0.79, P≤0.001) and mortality (AUC: 0.86 vs. 0.80, P≤0.001) with traditional risk factors alone. Implementing non-categorical NRI, we noted a 65% increase in correct reclassification compared to the COXPH model for both MAE and mortality (P≤0.05). Assessing the relative risk of participants, DeepSurv was the only learning algorithm to develop a significantly improved risk score criterion, which outcompeted COXPH for both MAE (4.07 vs. 2.66, P≤0.001) and mortality (6.28 vs. 4.67, P=0.014). The addition of inflammatory or imaging biomarkers to traditional risk factors showed minimal or no significant improvement in model prediction. Conclusion: DeepSurv can leverage simple office-based clinical features alone to accurately predict ASCVD risk and cardiovascular outcomes, without the need for additional features such as inflammatory and imaging biomarkers.
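The link between COXPH and DeepSurv in this comparison is that DeepSurv keeps the Cox model's training objective, the negative log partial likelihood, and only replaces the linear risk score with a neural network output. A minimal sketch of that shared loss (Breslow form, assuming no tied event times; the subjects and scores below are illustrative, not MESA data):

```python
import math

def neg_log_partial_likelihood(times, events, scores):
    """Cox negative log partial likelihood, assuming no tied event times.
    `scores` is the per-subject risk score: a linear predictor in COXPH,
    a network output in DeepSurv."""
    nll = 0.0
    for i, (t_i, e_i) in enumerate(zip(times, events)):
        if not e_i:
            continue  # censored subjects contribute only to risk sets
        # Risk set: everyone still under observation at time t_i.
        risk = [math.exp(s) for t, s in zip(times, scores) if t >= t_i]
        nll -= scores[i] - math.log(sum(risk))
    return nll

# Three subjects, all with observed events, identical risk scores: each
# event term is 1/|risk set|, so the loss is log(3) + log(2) + log(1).
times, events, scores = [1.0, 2.0, 3.0], [1, 1, 1], [0.0, 0.0, 0.0]
print(round(neg_log_partial_likelihood(times, events, scores), 4))  # → 1.7918
```

Minimizing this quantity with respect to the network weights is what lets DeepSurv learn non-linear interactions among office-based clinical features while remaining directly comparable to COXPH on the same censored data.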

