The influence of machine learning-based knowledge management model on enterprise organizational capability innovation and industrial development

PLoS ONE ◽  
2020 ◽  
Vol 15 (12) ◽  
pp. e0242253
Author(s):  
Zhigang Zhou ◽  
Yanyan Liu ◽  
Hao Yu ◽  
Lihua Ren

The aims are to explore the construction of a knowledge management model for engineering cost consulting enterprises and to expand the application of data mining and machine learning methods in building such models. Through a questionnaire survey, the construction of knowledge management models for construction-related enterprises and engineering cost consulting enterprises is discussed. First, through analysis of the ontology-based data mining (OBDM) algorithm and the association analysis (Apriori) algorithm, a multilayer association rule mining algorithm based on ontology and machine learning (the ML-AR algorithm) is proposed, and the performance of the algorithms is compared and analyzed. Second, at the knowledge management level, analysis and statistics are conducted on knowledge acquisition, sharing, storage, and innovation. Finally, on this basis, a knowledge management model for engineering cost consulting enterprises is built and analyzed. The results show that the reliability coefficient of the questionnaire is above 0.8 and the average variance extracted is above 0.7, indicating excellent reliability and validity. The ML-AR algorithm outperforms the other two algorithms in efficiency as both the number of transactions and the support threshold vary, making it a promising candidate for the enterprise knowledge management model. Each level of knowledge management is positively correlated with the others; among them, the positive correlation between knowledge acquisition and knowledge sharing is the strongest. The enterprise knowledge management model has a positive impact on organizational innovation capability and industrial development. This work provides a direction for the development of enterprise knowledge management and the improvement of innovation ability.
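The association analysis step the abstract builds on can be illustrated with classic Apriori frequent-itemset mining. The following is a minimal sketch: the transactions, item names, and support threshold are invented for illustration, and the paper's ML-AR extension (ontology-based multilayer association) is not reproduced here.

```python
def apriori(transactions, min_support):
    """Return every itemset whose support (fraction of transactions
    containing it) is at least min_support."""
    tx = [frozenset(t) for t in transactions]
    n = len(tx)
    current = {frozenset([i]) for t in tx for i in t}   # candidate 1-itemsets
    frequent = {}
    k = 1
    while current:
        # Count support of each candidate k-itemset.
        counts = {c: sum(1 for t in tx if c <= t) for c in current}
        kept = {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}
        frequent.update(kept)
        # Join frequent k-itemsets to form (k+1)-candidates (Apriori property:
        # every subset of a frequent itemset must itself be frequent).
        keys = list(kept)
        current = {a | b for i, a in enumerate(keys) for b in keys[i + 1:]
                   if len(a | b) == k + 1}
        k += 1
    return frequent

# Hypothetical knowledge-item transactions from a cost consulting workflow.
txs = [{"ontology", "cost"}, {"ontology", "cost", "risk"},
       {"cost", "risk"}, {"ontology", "cost"}]
freq = apriori(txs, min_support=0.5)
```

With these toy transactions, {ontology, cost} is frequent (support 0.75) while the full triple {ontology, cost, risk} falls below the 0.5 threshold.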

2013 ◽  
Vol 411-414 ◽  
pp. 1099-1103
Author(s):  
Zhen Yu Yang ◽  
Jun Zhou

Knowledge sharing is one of the most important characteristics of knowledge management. In the traditional knowledge management model, employees select the knowledge to share only through independent action, and the behavior of employees of the same type is not taken into account. This paper integrates a data mining recommendation algorithm with the traditional ontology-based knowledge management model and proposes a process-oriented enterprise knowledge management model based on recommendation, in which knowledge is the main body, the domain is process-driven, and recommendation is the core behavior. Recommending appropriate knowledge to staff improves the efficiency with which employees reach the knowledge they need and promotes the application and innovation of knowledge within the enterprise.
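A common way to realize such a recommendation step is collaborative filtering over employees' knowledge-access histories: items consulted by colleagues with similar histories are suggested to the target employee. The sketch below uses cosine similarity over access sets; the employee names, knowledge items, and scoring scheme are illustrative assumptions, not the paper's specific algorithm.

```python
from math import sqrt

def recommend(access_log, target, k=2):
    """Recommend knowledge items to `target`, weighted by the cosine
    similarity between access histories (sets of item ids)."""
    target_items = access_log[target]
    scores = {}
    for user, items in access_log.items():
        if user == target or not items:
            continue
        sim = len(target_items & items) / sqrt(len(target_items) * len(items))
        if sim == 0.0:
            continue
        # Each similar user votes for the items the target has not seen yet.
        for item in items - target_items:
            scores[item] = scores.get(item, 0.0) + sim
    return [it for it, _ in sorted(scores.items(), key=lambda x: -x[1])][:k]

# Hypothetical access log: employee -> knowledge items consulted.
log = {
    "alice": {"BIM-guide", "cost-template"},
    "bob":   {"BIM-guide", "cost-template", "risk-checklist"},
    "carol": {"risk-checklist", "audit-notes"},
    "dave":  {"BIM-guide"},
}
recs = recommend(log, "alice")
```

Here bob's history overlaps strongly with alice's, so his extra item "risk-checklist" is recommended, while carol's disjoint history contributes nothing.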


Malware is a pervasive problem today. Malware is a file that may reside on a client machine and can pose an unacceptable risk to the safety and privacy of personal computer users as malicious threats expand. This paper describes malware threat detection using data mining and machine learning. It presents malware detection algorithms that apply a machine learning approach to data files, explains how to disassemble executable files and build instruction sets, and examines different machine learning and data mining algorithms for feature extraction and reduction in malware detection. The system precisely distinguishes both new and known malware instances, even though the binary difference between malware and legitimate software is usually small. There is a need for a framework that can detect new malicious executable files.
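A standard feature-extraction step in this line of work is to turn a binary into its set of byte n-grams and compare it against n-grams seen in known-malicious files. The sketch below scores a sample by Jaccard similarity; the byte strings and threshold idea are toy assumptions, not the paper's detector.

```python
def byte_ngrams(data: bytes, n: int = 2) -> set:
    """Extract the set of n-byte sequences occurring in a binary blob."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def malware_score(sample: bytes, malware_profiles: list, n: int = 2) -> float:
    """Jaccard similarity between the sample's n-grams and the union of
    n-grams seen in known-malicious files; higher means more suspicious."""
    sample_grams = byte_ngrams(sample, n)
    known = set().union(*(byte_ngrams(m, n) for m in malware_profiles))
    if not sample_grams or not known:
        return 0.0
    return len(sample_grams & known) / len(sample_grams | known)

# Toy "known malware" byte pattern (e.g. NOP sled + jump-to-self) and two samples.
profiles = [b"\x90\x90\xeb\xfe"]
suspicious = malware_score(b"\x90\x90\xeb\xfe", profiles)
benign = malware_score(b"hello world", profiles)
```

In practice the same idea is applied to instruction sequences from disassembled executables rather than raw bytes, with dimensionality reduction before a classifier is trained.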


2021 ◽  
Vol 6 (2) ◽  
pp. 167-174
Author(s):  
Abdul Latif ◽  
Lady Agustin Fitriana ◽  
Muhammad Rifqi Firdaus

Software development involves several interrelated factors that influence development effort and productivity. Improving the estimation techniques available to project managers facilitates more effective time and budget control in software development. Software effort (or cost) estimation can help a software development company overcome the difficulties experienced in estimating development effort. This study compares the machine learning methods Linear Regression (LR), Multilayer Perceptron (MLP), Radial Basis Function (RBF), and Decision Tree Random Forest (DTRF) for estimating software cost/effort. These approaches are tested on 10 software development project datasets, producing new knowledge about which methods are the most accurate for software effort estimation, and about whether applying Particle Swarm Optimization (PSO) for attribute selection improves accuracy compared with omitting it. The data mining algorithm that produced the most optimal software effort estimates is Linear Regression, with an average RMSE of 1603.024 across the 10 datasets tested. Applying PSO feature selection further reduces the average RMSE to 1552.999. This indicates that, compared with the original linear regression model, the error of the software effort estimates decreases by 3.12% when PSO feature selection is applied.


2021 ◽  
Vol 16 (1) ◽  
pp. 1-19 ◽  
Author(s):  
Khodabakhsh Zabihi ◽  
Falk Huettmann ◽  
Brian Young

Native bark beetles (Coleoptera: Curculionidae: Scolytinae) are a multi-species complex that ranks among the key disturbances of coniferous forests of western North America. Many landscape-level variables are known to influence beetle outbreaks, such as suitable climatic conditions, the spatial arrangement of incipient populations, topography, the abundance of mature host trees, and disturbance history, including former outbreaks and fire. We assembled the first open access dataset, usable in open source GIS platforms, for understanding the ecology of the bark beetle organism in Alaska. We used boosted classification and regression trees, a machine learning data mining algorithm, to model and predict the relationship between 14 environmental variables, as model predictors, and 838 occurrence records of 68 bark beetle species compared with pseudo-absence locations across the state of Alaska. The model predictors include topography- and climate-related variables as well as feature proximities and anthropogenic factors. We were able to model, predict, and map the multi-species bark beetle occurrences across the state of Alaska at a 1-km spatial resolution, in addition to providing a good-quality environmental dataset freely accessible to the public. About 16% of the mixed forest and 59% of the evergreen forest are expected to be occupied by the bark beetles under current climatic conditions and the biophysical attributes of the landscape. The open access dataset that we prepared, and the machine learning modeling approach that we used, can provide a foundation for future research not only on scolytines but also for other multi-species questions of concern, such as forest defoliators, and small and big game wildlife species worldwide.
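The boosting idea behind boosted classification trees can be shown at miniature scale: an ensemble of one-split decision stumps, each trained on reweighted presence/pseudo-absence examples (AdaBoost-style). The two-feature toy data below (elevation, temperature) and labels are invented; the study's 14 predictors and tree depth are not reproduced.

```python
from math import log, exp

def stump_predict(stump, x):
    feat, thr, left, right = stump
    return left if x[feat] <= thr else right

def best_stump(X, y, w):
    """Exhaustively pick the weighted-error-minimizing one-split stump."""
    best, best_err = None, float("inf")
    for f in range(len(X[0])):
        for thr in sorted({row[f] for row in X}):
            for left, right in ((-1, 1), (1, -1)):
                err = sum(wi for row, yi, wi in zip(X, y, w)
                          if stump_predict((f, thr, left, right), row) != yi)
                if err < best_err:
                    best_err, best = err, (f, thr, left, right)
    return best, best_err

def adaboost(X, y, rounds=5):
    """Boost stumps: upweight misclassified examples each round."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        stump, err = best_stump(X, y, w)
        err = max(err, 1e-10)                      # avoid log(1/0)
        alpha = 0.5 * log((1 - err) / err)         # stump's vote weight
        ensemble.append((alpha, stump))
        w = [wi * exp(-alpha * yi * stump_predict(stump, row))
             for wi, row, yi in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    return 1 if sum(a * stump_predict(s, x) for a, s in ensemble) >= 0 else -1

# Toy cells: [elevation_m, mean_temp_C]; 1 = occurrence, -1 = pseudo-absence.
X = [[100, 5], [200, 6], [800, 1], [900, 0]]
y = [1, 1, -1, -1]
model = adaboost(X, y, rounds=3)
```

Real boosted regression trees replace the exponential reweighting with gradient steps and deeper trees, but the additive "weak learner ensemble" structure is the same.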


2020 ◽  
Author(s):  
Mohammed J. Zaki ◽  
Wagner Meira, Jr

2019 ◽  
Vol 13 (1) ◽  
pp. 27-36
Author(s):  
Andreas Neubert

Due to the different characteristics of piece goods (e.g. size and weight), they are transported in general cargo warehouses by manually operated industrial trucks such as forklifts and pallet trucks. Since manual activities are susceptible to human error, errors occur in the logistical processes of general cargo warehouses, leading to incorrect loading, incorrect stacking, and damage to storage equipment and general cargo. The costs arising from errors in logistical processes could be reduced if these errors were remedied in advance. This paper presents a monitoring procedure for logistical processes in manually operated general cargo warehouses in which predictive analysis is applied. Seven steps for integrating predictive analysis into the IT infrastructure of general cargo warehouses are introduced and described in detail. The results of this paper comprise the CRISP4BigData model, the SVM data mining algorithm, the data mining tool R, and the programming language C++ for scoring in general cargo warehouses. After the system has been created and installed in general cargo warehouses, initial results obtained with this method over a certain time span will be compared with results obtained over the same period through manual recording without it.
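The scoring step mentioned above (applying an already-trained SVM to new warehouse events) reduces to evaluating a decision function. For a linear SVM this is just f(x) = w·x + b, sketched below in Python for brevity; the weights, bias, and feature names are hypothetical stand-ins for a model trained elsewhere (in the paper's setup, in R, with scoring implemented in C++).

```python
def svm_score(weights, bias, features):
    """Linear SVM decision value f(x) = w.x + b; its sign gives the class."""
    return sum(w * x for w, x in zip(weights, features)) + bias

def classify(weights, bias, features, threshold=0.0):
    """Flag a logistical event as anomalous when the decision value
    exceeds the threshold."""
    return "anomalous" if svm_score(weights, bias, features) > threshold else "normal"

# Hypothetical learned parameters for three event features:
# [pallet weight deviation, scan delay, route deviation].
w = [0.8, -0.5, 1.2]
b = -0.3
event = [1.0, 0.4, 0.5]
value = svm_score(w, b, event)
label = classify(w, b, event)
```

Keeping scoring this simple is what makes a separate lightweight C++ implementation practical: only the learned parameters need to be exported from the training environment.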

