Analysis of Enterprise Financial and Economic Impact Based on Background Deep Learning Model under Business Administration

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Jingxiao Hu

Enterprise finance has become an indispensable investment channel in people's lives, and business administration can provide a better economic environment for its development. As enterprise structures grow more complex, business administration carries considerable responsibility for organizing and supervising today's social management structure. How China can exercise these functions under the new situation of increasingly frequent world economic exchange is an important factor in promoting the stable development of financial markets. Addressing problems of economic-activity behavior and the certainty of financial index systems under existing business administration, this paper applies a deep learning model to risk analysis, income analysis, profit-and-loss analysis, and related tasks. The model's formulas are used to compute financial-economic data graphs, and the resulting data are compared to study how several business management methods affect the development of enterprise financial economy. The current management modes examined are two: e-commerce and ERP management. Both have distinctive management characteristics, have greatly promoted the development of modern management, and well illustrate its features. The experiments also compare financial data under four algorithms in terms of uncertainty, profit and loss, discreteness, volatility, and possibility. Finally, after tracing the sources of uncertainty, risk prediction and risk management are carried out by constructing decision trees, and these structural models provide a comprehensive analysis of enterprise financial economy and support good trends and development prospects.
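The decision-tree step mentioned above can be sketched as a few nested splits over financial indicators; the feature names and thresholds below are hypothetical illustrations, not values from the study.

```python
# Minimal hand-built decision tree for enterprise financial risk rating.
# The features (debt ratio, profit margin, cash-flow volatility) and the
# split thresholds are hypothetical, chosen only to illustrate the idea.

def risk_rating(debt_ratio, profit_margin, cash_flow_volatility):
    """Classify enterprise financial risk with simple nested splits."""
    if debt_ratio > 0.7:                      # highly leveraged firms
        return "high" if cash_flow_volatility > 0.3 else "medium"
    if profit_margin < 0.05:                  # thin margins
        return "medium"
    return "low"

print(risk_rating(0.8, 0.10, 0.4))  # heavily indebted, volatile cash flow
print(risk_rating(0.4, 0.12, 0.1))  # healthy balance sheet
```

In practice such trees are learned from labeled financial data rather than written by hand, but the learned model has exactly this if/else structure.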

2019 ◽  
Vol 2019 ◽  
pp. 1-14
Author(s):  
Renzhou Gui ◽  
Tongjie Chen ◽  
Han Nie

With the continuous development of science, more and more research results have shown that machine learning is capable of diagnosing and studying major depressive disorder (MDD) in the brain. We propose a deep learning network with multiple branches and local residual feedback for four types of functional magnetic resonance imaging (fMRI) data produced by depressed patients and healthy controls while listening to music with positive and negative emotions. We use a large convolution kernel of the same size as the correlation matrix to match features and obtain feature-matching results for 264 regions of interest (ROIs). First, the four-dimensional fMRI data are used to generate a two-dimensional ROI-based correlation matrix of one person's brain, which is then processed with a threshold selected according to the characteristics of complex networks and small-world networks. The deep learning model is then compared for classification against support vector machines (SVM), logistic regression (LR), k-nearest neighbors (kNN), a common deep neural network (DNN), and a deep convolutional neural network (CNN). Finally, we compute the matched ROIs from the intermediate results of our deep learning model, which can help related fields further explore the pathogeny of depression.
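The correlation-matrix construction and thresholding described above can be sketched with synthetic ROI time series; the 264-ROI count follows the abstract, but the signals and the threshold value here are illustrative stand-ins rather than the small-world-derived value used in the paper.

```python
import numpy as np

# Build one subject's ROI correlation matrix and threshold it into a
# binary adjacency matrix, as in the preprocessing step above.
rng = np.random.default_rng(0)
n_rois, n_timepoints = 264, 150
roi_signals = rng.standard_normal((n_rois, n_timepoints))  # synthetic data

corr = np.corrcoef(roi_signals)              # 264 x 264 correlation matrix
threshold = 0.2                              # illustrative cutoff
adjacency = (np.abs(corr) >= threshold).astype(np.uint8)
np.fill_diagonal(adjacency, 0)               # drop self-connections

print(corr.shape, int(adjacency.sum()))
```

The resulting matrix (or the thresholded graph) is what the convolutional model then consumes as a 2-D input.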


2021 ◽  
pp. 1-17
Author(s):  
Kun Zhu ◽  
Shuai Zhang ◽  
Wenyu Zhang ◽  
Zhiqiang Zhang

Accurate taxi demand forecasting is essential for estimating changes in demand and making informed decisions. Although deep learning methods have been widely applied to taxi demand forecasting, they neglect the complexity of taxi demand data and the impact of event occurrences, making it hard to effectively model taxi demand in highly dynamic areas (e.g., areas with frequent events). Therefore, to achieve accurate and stable taxi demand forecasting in highly dynamic areas, a novel hybrid deep learning model is proposed in this study. First, to reduce the complexity of the taxi demand time series, the seasonal-trend decomposition procedure based on loess is employed to decompose the series into three simpler components (i.e., seasonal, trend, and remainder). Then, different forecasting methods are adopted for the different components to obtain robust forecasting results. Moreover, considering the instability and nonlinearity of the remainder component, this study proposes fusing event features (in particular, text data) to capture the unusual fluctuation patterns of the remainder component and address its extreme-value problem. Finally, a genetic algorithm is applied to determine the optimal weights for integrating the forecasts of the three components into the final taxi demand. Experimental results demonstrate the superior accuracy and reliability of the proposed model compared with other baseline forecasting models.
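The decompose-then-recombine idea can be illustrated with a classical moving-average decomposition standing in for the paper's loess-based STL; the demand series below is synthetic, and the component weights (which the paper finds with a genetic algorithm) are fixed at one here.

```python
import numpy as np

# Additive decomposition of a synthetic hourly taxi-demand series into
# trend, seasonal, and remainder components, then a weighted recombination.
period = 24                                   # daily seasonality, hourly data
t = np.arange(24 * 14, dtype=float)           # two weeks of hours
series = 100 + 0.5 * t + 20 * np.sin(2 * np.pi * t / period)

trend = np.convolve(series, np.ones(period) / period, mode="same")
detrended = series - trend
seasonal = np.tile([detrended[i::period].mean() for i in range(period)],
                   len(series) // period)     # mean profile per hour-of-day
remainder = series - trend - seasonal

# In the paper a genetic algorithm optimizes these weights on validation
# data; with unit weights the recombination reproduces the series exactly.
w = np.array([1.0, 1.0, 1.0])
recombined = w[0] * trend + w[1] * seasonal + w[2] * remainder
print(bool(np.allclose(recombined, series)))
```

In the actual model, each component would be forecast separately (and the remainder enriched with event text features) before the weighted sum is taken.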


2021 ◽  
Vol 310 ◽  
pp. 04002
Author(s):  
Nguyen Thanh Doan

Nowadays, expanding the application of deep learning technology is attracting the attention of many researchers in the field of remote sensing. This paper presents a methodology for using a deep convolutional neural network model to determine the position of the shoreline in Sentinel-2 satellite images. The methodology also provides techniques to reduce model retraining while preserving the accuracy of the results. Evaluation and analysis were conducted in the Mekong Delta region. The results show that interpolating the input images and calibrating the result thresholds improve accuracy and allow the trained deep learning model to be tested externally on different images. The paper also evaluates the impact of the training dataset on the quality of the results, and gives suggestions for the number of files in the training dataset, as well as the information used for model training, to solve the shoreline detection problem.
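The result-threshold calibration mentioned above can be sketched as a search for the cutoff that maximizes intersection-over-union against a reference mask; the probability map below is synthetic rather than an actual Sentinel-2 model output.

```python
import numpy as np

# Calibrate a binarization threshold for per-pixel water probabilities by
# maximizing IoU against a reference mask (synthetic data for illustration).
rng = np.random.default_rng(1)
truth = np.zeros((64, 64), dtype=bool)
truth[:, 32:] = True                                    # right half = water
probs = truth * 0.8 + rng.uniform(0, 0.3, truth.shape)  # noisy scores

def iou(pred, ref):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return inter / union

candidates = np.linspace(0.05, 0.95, 19)
best = max(candidates, key=lambda c: iou(probs >= c, truth))
print(float(best), float(iou(probs >= best, truth)))
```

The same calibration can be rerun per scene, which is one way to apply a fixed trained model to new images without retraining it.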


2021 ◽  
Vol 39 (15_suppl) ◽  
pp. 1556-1556
Author(s):  
Alexander S. Rich ◽  
Barry Leybovich ◽  
Melissa Estevez ◽  
Jamie Irvine ◽  
Nisha Singh ◽  
...  

1556 Background: Identifying patients with a particular cancer and determining the date of diagnosis from EHR data is important for selecting real-world research cohorts and conducting downstream analyses. However, cancer diagnoses and their dates are often not accurately recorded in the EHR in a structured form. We developed a unified deep learning model for identifying patients with NSCLC and their initial and advanced diagnosis date(s). Methods: The study used a cohort of 52,834 patients with lung cancer ICD codes from the nationwide deidentified Flatiron Health EHR-derived database. For all patients in the cohort, abstractors used an in-house technology-enabled platform to identify an NSCLC diagnosis, advanced disease, and relevant diagnosis date(s) via chart review. Advanced NSCLC was defined as stage IIIB or IV disease at diagnosis, or early-stage disease that recurred or progressed. The deep learning model was trained on 38,517 patients, with a separate 14,317-patient test cohort. The model input was a set of sentences containing keywords related to (a)NSCLC, extracted from a patient's EHR documents. Each sentence was associated with a date, using the document timestamp or, if present, a date mentioned explicitly in the sentence. The sentences were processed by a GRU network, followed by an attentional network that integrated across sentences, outputting a prediction of whether the patient had been diagnosed with (a)NSCLC and, if so, the diagnosis date(s). We measured sensitivity and positive predictive value (PPV) for extracting the presence of initial and advanced diagnoses in the test cohort. Among patients with both model-extracted and abstracted diagnosis dates, we also measured 30-day accuracy, defined as the proportion of patients whose dates match to within 30 days. Real-world overall survival (rwOS) for patients abstracted vs. model-extracted as advanced was calculated using Kaplan-Meier methods (index date: abstracted vs. model-extracted advanced diagnosis date). Results: Results in the Table show the sensitivity, PPV, and accuracy of the model-extracted diagnoses and dates. RwOS was similar using model-extracted aNSCLC diagnosis dates (median = 13.7 months) versus abstracted diagnosis dates (median = 13.3 months), with a difference of 0.4 months (95% CI = [0.0, 0.8]). Conclusions: Initial and advanced diagnoses of NSCLC and their dates can be accurately extracted from unstructured clinical text using a deep learning algorithm. This can further enable the use of EHR data for research on real-world treatment patterns, outcomes analyses, and other applications such as clinical trial matching. Future work should aim to understand the impact of model errors on downstream analyses. [Table: see text]
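The attention step that integrates across sentences can be sketched as follows; the per-sentence feature vectors (GRU outputs in the study) and the attention vector are random stand-ins, and the shapes are toy values.

```python
import numpy as np

# Attention pooling over sentence representations: score each sentence,
# softmax the scores, and take the weighted sum as the patient vector.
rng = np.random.default_rng(2)
n_sentences, hidden = 5, 8
sentence_feats = rng.standard_normal((n_sentences, hidden))  # GRU outputs
attn_vector = rng.standard_normal(hidden)                    # learned weights

scores = sentence_feats @ attn_vector
weights = np.exp(scores - scores.max())
weights /= weights.sum()                     # softmax over sentences

patient_repr = weights @ sentence_feats      # attention-pooled document vector
print(patient_repr.shape, round(float(weights.sum()), 6))
```

A classifier head on `patient_repr` would then emit the diagnosis prediction, with the attention weights indicating which sentences (and hence which dates) drove it.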


2021 ◽  
Vol 14 (1) ◽  
pp. 8
Author(s):  
Ihar Volkau ◽  
Abdul Mujeeb ◽  
Wenting Dai ◽  
Marius Erdt ◽  
Alexei Sourin

Deep learning provides new ways for defect detection in automatic optical inspection (AOI). However, existing deep learning methods require thousands of images of defects for training, which limits their usability in manufacturing, since images of defects are lacking before actual manufacturing starts. In contrast, we propose to train a defect detection deep learning model using a much smaller number of images without defects. We propose an unsupervised deep learning model, based on transfer learning, that extracts typical semantic patterns from defect-free samples (one-class training). The model is built upon a pre-trained VGG16 model and is further trained on custom datasets with different sizes of possible defects (printed circuit boards and soldered joints), using only a small number of normal samples. We found that defect detection performs very well on a smooth background; however, when the defect manifests as a change of texture, detection can be less accurate. The proposed study uses a self-supervised deep learning approach to identify whether the sample under analysis contains any deviations (of types not defined in advance) from the normal design. The method would improve the robustness of the AOI process in detecting defects.
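A minimal sketch of the one-class idea: embed samples with a pretrained backbone (VGG16 in the paper; random vectors stand in here), fit the defect-free feature distribution, and flag samples that sit far from it. The z-score distance used below is a simple stand-in, not the paper's exact scoring.

```python
import numpy as np

# One-class defect scoring on backbone features (synthetic stand-ins).
rng = np.random.default_rng(3)
feat_dim = 32
normal_feats = rng.standard_normal((200, feat_dim))   # defect-free set

mean = normal_feats.mean(axis=0)
std = normal_feats.std(axis=0) + 1e-8

def defect_score(feature_vec):
    """Mean squared z-score distance to the defect-free distribution."""
    z = (feature_vec - mean) / std
    return float(np.mean(z ** 2))

ok_sample = rng.standard_normal(feat_dim)    # resembles the training data
odd_sample = ok_sample + 5.0                 # shifted, i.e. anomalous
print(defect_score(ok_sample), defect_score(odd_sample))
```

A threshold on this score, tuned on held-out normal samples, separates acceptable parts from candidates for inspection without ever seeing a defect at training time.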


Author(s):  
Sanaa Elyassami ◽  
Achraf Ait Kaddour

<span lang="EN-US">Cardiovascular diseases remain the leading cause of death, taking an estimated 17.9 million lives each year and representing 31% of all global deaths. Patient records, including blood reports, cardiac echo reports, and physicians' notes, can be used to perform feature analysis and accurately classify heart disease patients. In this paper, an incremental deep learning model was developed and trained with stochastic gradient descent using feedforward neural networks. The chi-square test and dropout regularization were incorporated into the model to improve the generalization capability and performance of the heart disease patient classification model. The impact of the learning rate and the depth of the neural network on performance was explored. The hyperbolic tangent, the rectified linear unit, the Maxout, and the exponential rectified linear unit were used as activation functions for the hidden- and output-layer neurons. To avoid over-optimistic results, the performance of the proposed model was evaluated using balanced accuracy and overall predictive value in addition to accuracy, sensitivity, and specificity. The obtained results are promising, and the proposed model can be applied to larger datasets and used by physicians to accurately classify heart disease patients.</span>
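The dropout regularization incorporated into the model above can be illustrated in a few lines of inverted dropout; the layer width and drop rate are arbitrary example values.

```python
import numpy as np

# Inverted dropout: at training time, randomly zero activations and
# rescale the survivors so the expected activation is unchanged; at
# inference time the layer is the identity.
rng = np.random.default_rng(4)

def dropout(activations, rate, training=True):
    if not training or rate == 0.0:
        return activations
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)   # rescale kept units

hidden = np.ones(10_000)                        # a toy hidden layer
dropped = dropout(hidden, rate=0.5)
print(round(float(dropped.mean()), 3))          # close to 1.0 in expectation
```

Because the rescaling happens at training time, no change is needed at inference, which is why the `training=False` path simply returns the input.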


2022 ◽  
Vol 2022 ◽  
pp. 1-9
Author(s):  
Wanli Luo ◽  
Lei Zhang

Internet of Things applications are diverse in nature, and a key aspect of them is multimedia sensors and devices. These IoT multimedia devices form the Internet of Multimedia Things (IoMT), which, compared with the Internet of Things, generates a large amount of text data with differing characteristics and requirements. To address the problem that machine learning and single-structure deep learning models cannot effectively capture the emotional information in text, resulting in poor classification performance, this paper proposes a text classification method for tourism questions based on a deep learning model. First, the corpus is trained with the word2vec tool, based on the continuous bag-of-words (CBOW) model, to obtain the word-vector representation of the text. Then, an attention mechanism is introduced into the long short-term memory (LSTM) network, and the attention-based LSTM model is constructed for text feature extraction, highlighting the impact of different words in the input text on the text's emotion category. Finally, the text features are input into a Softmax classifier to obtain the probability distribution over text categories, and the model is trained with the cross-entropy loss function. Experimental results show an average accuracy, recall, and F-score of 0.943, 0.867, and 0.903, respectively, a better classification performance than other methods.
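The final Softmax-plus-cross-entropy step described above can be written out directly; the logits and the true label below are made-up example values.

```python
import numpy as np

# Softmax turns class scores into a probability distribution; cross-entropy
# is the negative log-probability assigned to the true class.
def softmax(logits):
    shifted = logits - logits.max(axis=-1, keepdims=True)  # numerical safety
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

def cross_entropy(logits, label_idx):
    probs = softmax(logits)
    return -float(np.log(probs[label_idx]))

logits = np.array([2.0, 0.5, -1.0])      # scores for 3 question categories
probs = softmax(logits)
loss = cross_entropy(logits, 0)          # true class is category 0
print(probs.round(3), round(loss, 4))
```

Training minimizes this loss averaged over the corpus, pushing the probability of each question's true category toward 1.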


2020 ◽  
Vol 13 (4) ◽  
pp. 627-640 ◽  
Author(s):  
Avinash Chandra Pandey ◽  
Dharmveer Singh Rajpoot

Background: Sentiment analysis is the contextual mining of text to determine users' viewpoints on sentimental topics commonly found on social networking websites. Twitter is one such site, where people express their opinions on any topic in the form of tweets. These tweets can be examined using various sentiment classification methods to find the opinions of users. Traditional sentiment analysis methods use manually extracted features for opinion classification; manual feature extraction is a complicated task since it requires predefined sentiment lexicons. Deep learning methods, on the other hand, automatically extract relevant features from data and hence provide better performance and richer representational capacity than traditional methods. Objective: The main aim of this paper is to enhance sentiment classification accuracy and to reduce computational cost. Method: To achieve this objective, a hybrid deep learning model based on a convolutional neural network and a bidirectional long short-term memory neural network is introduced. Results: The proposed sentiment classification method achieves the highest accuracy on most of the datasets. Further, statistical analysis validates the efficacy of the proposed method. Conclusion: Sentiment classification accuracy can be improved by creating effective hybrid models. Moreover, performance can also be enhanced by tuning the hyperparameters of deep learning models.
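The convolutional branch of such a hybrid can be sketched as a 1-D filter sliding over word embeddings followed by max-over-time pooling; the sizes and values below are toy stand-ins, and the BiLSTM branch is omitted.

```python
import numpy as np

# One convolutional filter over a sentence's word embeddings, then
# max-over-time pooling, as in the CNN half of a CNN-BiLSTM hybrid.
rng = np.random.default_rng(5)
seq_len, embed_dim, kernel_size = 12, 16, 3
embeddings = rng.standard_normal((seq_len, embed_dim))   # one sentence
kernel = rng.standard_normal((kernel_size, embed_dim))   # one conv filter

feature_map = np.array([
    np.sum(embeddings[i:i + kernel_size] * kernel)       # valid convolution
    for i in range(seq_len - kernel_size + 1)
])
pooled = feature_map.max()               # max-over-time pooling
print(feature_map.shape, float(pooled))
```

A real model applies many such filters in parallel and concatenates the pooled values with the BiLSTM output before the final classifier.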

