Contactless Medical Equipment AI Big Data Risk Control and Quasi-Thinking Iterative Planning

Author(s):  
Zhu Rongrong

Abstract: For contactless medical equipment AI big-data risk control and quasi-thinking iterative planning, the tanh equilibrium state of heavy-core clustering is adopted, built on a hierarchical fuzzy clustering system derived from differential incremental equilibrium theory. The approach successfully controls the risk of the internal data parameter groups of CT/MR machines within a big-data AI mathematical model. The polar plots of data processed by high-dimensional heavy-core clustering are regular and well structured, in contrast to the discrete, scattered polar plots of the raw data, so the dynamic behavior of CT/MR machines can be detected and controlled correctly over the whole life cycle. This supports predictive maintenance through early pre-inspection and orderly servicing of the medical system. The paper also proposes and designs deep big-data statistics for AI risk control of medical equipment and establishes standardized model software. The exposure time and heat capacity (MHU%) of CT tubes, as well as the internal laws of MR (magnetic resonance) operation, are evaluated scientifically, and the big data are processed two and three times with heavy-core clustering. After algorithm optimization, hundreds of thousands of nonlinear random perturbations per second are applied to the operation-and-maintenance database, with at least 30 concurrent operations, which greatly shortens the running time. Finally, after adding micro-vibration quasi-thinking iterative planning to the uncertain structure of the AI operation, the scientifically correct results required for high-dimensional information and images can be obtained. This kind of AI big-data risk control improves the intelligent management capability of medical institutions; the resulting predictive-maintenance software for AI big data is cross-platform and can be embedded into a web system.
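The abstract does not give the clustering details, so the following is only a minimal sketch of the general idea it names: bound CT-tube log parameters with a tanh transform and group them with plain fuzzy c-means clustering. The data, the tanh normalization step, and the cluster count are illustrative assumptions, not the author's implementation.

```python
# Minimal sketch (assumed details): tanh-normalized fuzzy c-means over
# hypothetical CT tube log parameters (exposure time, heat capacity MHU%).
import numpy as np

def tanh_equilibrium(x):
    """Map raw parameters onto a bounded scale via tanh (assumed step)."""
    z = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-12)
    return np.tanh(z)

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy c-means: returns cluster centers and membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (dist ** (2 / (m - 1)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Hypothetical CT tube log: columns = exposure time [ms], heat capacity [MHU%]
logs = np.column_stack([np.random.gamma(5, 20, 500), np.random.uniform(10, 90, 500)])
centers, U = fuzzy_c_means(tanh_equilibrium(logs), n_clusters=3)
print("cluster centers (tanh scale):\n", centers)
```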

2021, Vol. 26(1), pp. 67-77
Author(s):  
Siva Sankari Subbiah ◽  
Jayakumar Chinnappan

Nowadays, organizations collect huge volumes of data without knowing their usefulness. The rapid growth of the Internet allows organizations to capture data in many different formats through the Internet of Things (IoT), social media, and other disparate sources. The dimensionality of these datasets increases day by day at an extraordinary rate, resulting in large-scale, high-dimensional data. The present paper reviews the opportunities and challenges of feature selection for processing high-dimensional data with reduced complexity and improved accuracy. In the modern big-data world, feature selection plays a significant role in reducing dimensionality and the overfitting of the learning process. Researchers have proposed many feature selection methods for obtaining the most relevant features, especially from big datasets, which help to provide accurate learning results without degrading performance. This paper discusses the importance of feature selection, basic feature selection approaches, centralized and distributed big-data processing using Hadoop and Spark, and the challenges of feature selection, and it summarizes the related research work of various researchers. Overall, big-data analysis combined with feature selection improves the accuracy of learning.
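As a concrete illustration of the basic filter-style feature selection such surveys cover, the sketch below scores features with the chi-square statistic and keeps the top k on a synthetic high-dimensional dataset. The dataset, the scaler, and the choice of k are assumptions for demonstration only.

```python
# Minimal sketch: chi-square filter feature selection on synthetic data.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler

# Synthetic high-dimensional data: 1000 samples, 500 features, few informative.
X, y = make_classification(n_samples=1000, n_features=500, n_informative=20, random_state=0)
X = MinMaxScaler().fit_transform(X)            # chi2 requires non-negative inputs

selector = SelectKBest(score_func=chi2, k=20)  # keep the 20 highest-scoring features
X_reduced = selector.fit_transform(X, y)
print(X.shape, "->", X_reduced.shape)          # (1000, 500) -> (1000, 20)
```

Distributed analogues of the same filter idea (e.g., chi-square selection on Spark DataFrames) follow the same score-and-keep-top-k pattern at cluster scale.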


2021, Vol. 2021, pp. 1-11
Author(s):  
Zhiru Li ◽  
Wei Xu ◽  
Huibin Shi ◽  
Yuanyuan Zhang ◽  
Yan Yan

Considering the importance of energy in our lives and its impact on other critical infrastructures, this paper starts from the whole life cycle of big data and divides the security and privacy risk factors of energy big data into five stages: data collection, data transmission, data storage, data use, and data destruction. Taking the cloud environment into account, the paper analyzes the risk factors of each stage in detail and establishes a risk assessment index system for the security and privacy of energy big data. According to the different degrees of risk impact, the AHP method is used to assign weights to the indexes, a genetic algorithm is used to optimize the initial weights and thresholds of a BP neural network, and the optimized weights and thresholds are then given to the BP neural network, which is trained with the evaluation samples in the database. Finally, the trained model is used to evaluate a case to verify the applicability of the model.
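To make the GA-plus-BP step concrete, the sketch below uses a simple genetic algorithm to search for good initial weights of a small feed-forward network before backpropagation training would take over. The network size, GA settings, and synthetic risk-score data are illustrative assumptions, not the paper's actual index system or code.

```python
# Minimal sketch: GA search over the flattened initial weights of a small BP network.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 8))                       # 200 evaluation samples, 8 risk indexes
y = (X @ rng.random(8) / 8).reshape(-1, 1)     # synthetic risk scores in [0, 1]

n_in, n_hid, n_out = 8, 6, 1
n_params = n_in * n_hid + n_hid + n_hid * n_out + n_out

def unpack(theta):
    """Split a flat parameter vector into weight matrices and biases."""
    i = 0
    W1 = theta[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = theta[i:i + n_hid]; i += n_hid
    W2 = theta[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = theta[i:]
    return W1, b1, W2, b2

def mse(theta):
    """Fitness: mean squared error of the network defined by theta."""
    W1, b1, W2, b2 = unpack(theta)
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return float(np.mean((pred - y) ** 2))

# Genetic algorithm over flattened weight vectors.
pop = rng.uniform(-1, 1, (40, n_params))
for gen in range(50):
    fitness = np.array([mse(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:20]]    # truncation selection
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(20)], parents[rng.integers(20)]
        mask = rng.random(n_params) < 0.5      # uniform crossover
        child = np.where(mask, a, b)
        child += rng.normal(0, 0.1, n_params) * (rng.random(n_params) < 0.05)  # mutation
        children.append(child)
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmin([mse(ind) for ind in pop])]
print("GA-optimized initial MSE:", round(mse(best), 5))
# 'best' would then seed the BP network, which is further trained with backpropagation.
```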


2021
Author(s):  
Ning Tao ◽  
Wang Jiayu ◽  
Han Yumeng

Abstract Background: In order to effectively solve the problems of redundancy, unfairness, low satisfaction, and high cost of emergency material allocation caused by unreasonable allocation during sudden disasters, and to minimize economic and penalty costs while maximizing the satisfaction rate of disaster victims, a 3-level network emergency material allocation mode based on big data is proposed in this paper. Methods: Taking the degree of loss and the dynamic change of material demand in the disaster-stricken areas as constraints, models for demand forecasting, scheduling optimization, targeted allocation, and disaster victims' satisfaction based on emergency relief materials are constructed. The Sample Average Approximation method and an improved NSGA-II algorithm are designed to solve the problem. Results: With the improved NSGA-II, the objective value is significantly reduced. From the fairness evaluation of the two models' distribution schemes, the scheme obtained by the improved NSGA-II is more suitable for distributing emergency supplies under fairness requirements. Conclusions: The 3-level network allocation mode and the improved NSGA-II can effectively solve emergency relief material allocation based on big data. The next step is to design a scheduling model covering all feasible medical-supply allocation routes to improve the practicability of the model.
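For orientation only, the sketch below sets up a toy two-objective allocation problem (minimize transport cost, maximize victims' satisfaction) and solves it with the standard NSGA-II from pymoo (a recent pymoo release is assumed). The demands, costs, supply limit, and single-level structure are invented for illustration; they are not the paper's 3-level model or its improved algorithm.

```python
# Minimal sketch: toy relief allocation as a two-objective problem for NSGA-II.
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

demand = np.array([120.0, 80.0, 150.0])        # hypothetical demand per disaster site
unit_cost = np.array([4.0, 7.0, 5.0])          # hypothetical unit transport cost
supply = 300.0                                 # total available relief materials

class AllocationProblem(ElementwiseProblem):
    def __init__(self):
        super().__init__(n_var=3, n_obj=2, n_ieq_constr=1,
                         xl=np.zeros(3), xu=demand)   # allocate at most the demand
    def _evaluate(self, x, out, *args, **kwargs):
        cost = float(unit_cost @ x)                   # objective 1: total cost
        satisfaction = float(np.mean(x / demand))     # mean fill rate in [0, 1]
        out["F"] = [cost, -satisfaction]              # maximize satisfaction
        out["G"] = [x.sum() - supply]                 # total allocation <= supply

res = minimize(AllocationProblem(), NSGA2(pop_size=60), ("n_gen", 100),
               seed=1, verbose=False)
print("Pareto front size:", len(res.F))
print("example trade-off (cost, satisfaction):", res.F[0][0], -res.F[0][1])
```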

