Identifying Diseases and Diagnosis Using Machine Learning

Machine learning is the method of optimizing a performance criterion based on previous experience. Using statistical theory, it builds a mathematical model whose main task is to infer from the examples given, and it uses computational methods to learn directly from data. In disease diagnosis, recognizing patterns is essential for identifying a disease correctly. Machine learning is used to build models that predict an output from inputs related to previously observed data, and identifying and detecting a disease is the first step toward curing it. Classification algorithms are used to classify diseases, together with a range of dimensionality reduction algorithms. With machine learning, a computer can learn without being explicitly programmed: a hypothesis is selected that best fits the set of observations. Machine learning handles multi-dimensional and high-dimensional data, and it allows automatic and sophisticated algorithms to be built.
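The abstract above describes classification from labeled examples in general terms. As an illustration only (the abstract names no specific algorithm or data), a minimal sketch of one of the simplest classifiers, nearest centroid, with made-up symptom features and disease labels:

```python
# Minimal sketch: a nearest-centroid classifier for disease labels.
# The feature vectors and class names are hypothetical; real inputs
# would come from preprocessed patient records.
from math import dist

def fit_centroids(samples, labels):
    """Average the feature vectors of each class into one centroid."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda y: dist(centroids[y], x))

# Two hypothetical classes described by two numeric symptom scores each.
X = [[1.0, 0.2], [0.9, 0.1], [0.1, 1.0], [0.2, 0.9]]
y = ["flu", "flu", "allergy", "allergy"]
model = fit_centroids(X, y)
print(predict(model, [0.95, 0.15]))  # → flu
```

The "best-fit hypothesis" of the abstract is here simply the set of per-class centroids.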

2014 ◽  
Vol 7 (14) ◽  
pp. 9
Author(s):  
Patrick Townsend Valencia

We performed a theoretical and experimental study to define the best way to model, with finite elements, the aft sandwich structure of a fiberglass boat less than 15 meters in length, using an isotropic linear mathematical model that fits anisotropic material conditions. This is done by defining the properties of the ship's fiberglass-resin structure, which represents the influence of the forces acting during planing on the geometry of the entire vessel. The formulation of the finite element method is presented, which works on the mathematical model to define the limitations of the results obtained. The isotropic material adjustment is calculated using the Halpin-Tsai laws, developing their mathematical formulation for the restrictions on the modulus data entered into the finite element program, experimentally calculated for each of the sandwich materials. The best-fit mathematical representation of the composite modulus justifies the calculation thereof.
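The Halpin-Tsai relations mentioned above estimate an effective modulus for a fibre/matrix composite. A short sketch of the standard form of that equation; the numeric values (typical E-glass fibre in polyester resin) are illustrative, not data from the paper:

```python
# Halpin-Tsai estimate of an effective composite modulus E_c:
#   eta = (E_f/E_m - 1) / (E_f/E_m + xi)
#   E_c = E_m * (1 + xi*eta*v_f) / (1 - eta*v_f)
# where E_f, E_m are fibre and matrix moduli, v_f the fibre volume
# fraction, and xi a geometry factor (here the common value 2).
def halpin_tsai(E_f, E_m, v_f, xi=2.0):
    eta = (E_f / E_m - 1.0) / (E_f / E_m + xi)
    return E_m * (1.0 + xi * eta * v_f) / (1.0 - eta * v_f)

# Illustrative values: E-glass (~72 GPa) in polyester resin (~3.5 GPa),
# 40 % fibre volume fraction.
E_c = halpin_tsai(72.0, 3.5, 0.40)
print(round(E_c, 2))  # effective modulus in GPa, between E_m and E_f
```

The result necessarily lies between the matrix and fibre moduli, which is the sanity check one applies before feeding such values into a finite element program.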


2019 ◽  
Vol 4 (1) ◽  
pp. 269-282
Author(s):  
L.Y. Levin ◽  
◽  
M.A. Semin ◽  
A.V. Bogomyagkov ◽  
O.S. Parshakov ◽  
...  

The paper presents general information about the software application “Frozen Wall”, which was designed to simulate frozen wall formation around vertical shafts under construction. The main feature of the developed application is the possibility of calibrating the mathematical model for the best fit with the experimental temperature measurements by numerically solving the inverse Stefan problem. In addition, it takes into account a number of technological processes that affect the state of the frozen wall. Based on calculations performed in the application, it is possible to develop technical measures aimed at ensuring the efficiency of mine shaft construction in difficult hydrogeological conditions.
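The calibration step described above fits model parameters so that simulated temperatures match sensor measurements. A toy sketch of that idea, with a deliberately simple made-up cooling curve standing in for the real Stefan-problem solver:

```python
# Toy calibration sketch: choose the model parameter k that minimizes
# the misfit between simulated and "measured" temperatures. The real
# application solves an inverse Stefan problem numerically; here the
# "model" is a hypothetical cooling curve T(t) = T0 - k*sqrt(t).
from math import sqrt

def simulate(k, times, T0=8.0):
    return [T0 - k * sqrt(t) for t in times]

def calibrate(times, measured, candidates):
    """Return the candidate k with the smallest sum of squared residuals."""
    def misfit(k):
        return sum((s - m) ** 2 for s, m in zip(simulate(k, times), measured))
    return min(candidates, key=misfit)

times = [1, 4, 9, 16]
measured = [6.1, 4.0, 2.1, -0.1]          # synthetic "sensor" readings
best_k = calibrate(times, measured, [i / 10 for i in range(1, 31)])
print(best_k)  # → 2.0
```

A production inverse solver would use gradient-based optimization rather than a grid of candidates, but the structure (forward model, misfit, minimizer) is the same.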


1979 ◽  
Vol 21 (6) ◽  
pp. 389-396 ◽  
Author(s):  
G. T. S. Done

This paper is concerned with the problem of adjusting the mathematical model of a system such that the computed natural frequencies coincide with those measured experimentally. The particular system considered is a laboratory turbine-rotor model, modelled mathematically by 42 Timoshenko beam elements and lumped masses. Model adjustments are made by assuming, firstly, Young's modulus and the modulus of rigidity to be variable, a change from standard values representing overall stiffness deficiencies in the mathematical model. In this case, a best fit to the lowest six natural frequencies, as measured experimentally, is made. Secondly, stiffness diameters are assumed variable, thereby allowing for deficiencies in the model near discontinuous changes of section; in this case, the lowest six natural frequencies are matched exactly, while an overall measure of the differences between the actual and the adjusted stiffness diameters is minimized. An analysis for the rates of change of natural frequency with the various stiffness properties (i.e. the sensitivities) is presented, and the results of the manipulation discussed.
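The sensitivities mentioned above are derivatives of natural frequency with respect to stiffness properties. A minimal sketch on a single-degree-of-freedom oscillator (not the 42-element rotor model of the paper), checking an analytic sensitivity against a finite-difference estimate; the stiffness and mass values are illustrative:

```python
# For a single-DOF oscillator, f = sqrt(k/m) / (2*pi), so the
# sensitivity of frequency to stiffness is df/dk = f / (2k).
# We verify that against a central finite difference, the same kind of
# quantity computed per stiffness property in a model-updating scheme.
from math import pi, sqrt

def natural_frequency(k, m):
    return sqrt(k / m) / (2.0 * pi)

def sensitivity_fd(k, m, h=1e-4):
    """Central finite-difference estimate of df/dk."""
    return (natural_frequency(k + h, m) - natural_frequency(k - h, m)) / (2 * h)

k, m = 4.0e4, 10.0                                # N/m, kg (illustrative)
analytic = natural_frequency(k, m) / (2.0 * k)    # df/dk = f / (2k)
print(analytic, sensitivity_fd(k, m))             # the two should agree
```

For a multi-element model the same derivative is taken for each design variable, giving the sensitivity matrix that drives the adjustment.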


2020 ◽  
Vol 13 (1) ◽  
pp. 148-151
Author(s):  
Kristóf Muhi ◽  
Zsolt Csaba Johanyák

In most cases, a dataset obtained through observation, measurement, etc. cannot be directly used for the training of a machine learning based system due to the unavoidable existence of missing data, inconsistencies and a high-dimensional feature space. Additionally, the individual features can contain quite different data types and ranges. For this reason, a data preprocessing step is nearly always necessary before the data can be used. This paper gives a short review of the typical methods applicable in the preprocessing and dimensionality reduction of raw data.
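Two of the preprocessing steps such a review typically covers are missing-value imputation and feature scaling. A small self-contained sketch (the data and column layout are made up for illustration):

```python
# Preprocessing sketch: impute missing values (None) with the column
# mean, then min-max scale each feature to [0, 1] so features with
# very different ranges become comparable.
def preprocess(rows):
    cols = list(zip(*rows))                  # column-wise view
    cleaned = []
    for col in cols:
        known = [v for v in col if v is not None]
        mean = sum(known) / len(known)
        col = [mean if v is None else v for v in col]   # imputation
        lo, hi = min(col), max(col)
        col = [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in col]
        cleaned.append(col)
    return [list(r) for r in zip(*cleaned)]  # back to row-wise

data = [[1.0, 200.0], [None, 400.0], [3.0, 300.0]]
print(preprocess(data))  # → [[0.0, 0.0], [0.5, 1.0], [1.0, 0.5]]
```

Dimensionality reduction (e.g. PCA) would follow as a further step on the cleaned, scaled matrix.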


Author(s):  
Zafar Usmanov ◽  
◽  
Abdunabi Kosimov ◽  

Using the example of a model collection of 10 texts in five languages (English, German, Spanish, Italian, and French) written in Latin script, the article establishes the applicability of the γ-classifier for automatic recognition of the language of a work based on the frequencies of the 26 common Latin letters. The mathematical model of the γ-classifier is represented as a triad. Its first component is a digital portrait (DP) of the text, the frequency distribution of alphabetic unigrams in the text; the second component is a set of formulas for calculating the distances between the DPs of texts; and the third is a machine learning algorithm that implements the hypothesis of “homogeneity” of works written in one language and “heterogeneity” of works written in different languages. The tuning of the algorithm, using a table of pairwise distances between all works of the model collection, consisted of determining an optimal value of the real parameter γ for which the error of violating the “homogeneity” hypothesis is minimized. The γ-classifier trained on the texts of the model collection showed a high, 100% accuracy in recognizing the languages of the works. For testing the classifier, an additional six random texts were selected, of which five were in the same languages as the texts of the model collection. By the nearest-neighbor (in terms of distance) method, all new texts confirmed their homogeneity with the corresponding pairs of monolingual works. The sixth text, in Romanian, showed its heterogeneity in relation to all elements of the collection. At the same time, it showed closeness in minimum distances, first of all, to two texts in Spanish and then to two works in Italian.
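The digital-portrait idea above is simple to sketch: a 26-component letter-frequency vector per text, compared by distance, with a new text assigned the language of its nearest reference. The γ tuning step and the real corpus are omitted, the distance is Euclidean (the paper's exact distance formulas are not reproduced here), and the texts are tiny illustrations:

```python
# Sketch of a letter-frequency "digital portrait" language classifier.
from math import dist
from string import ascii_lowercase

def digital_portrait(text):
    """Relative frequency of each of the 26 Latin letters in the text."""
    letters = [c for c in text.lower() if c in ascii_lowercase]
    return [letters.count(c) / len(letters) for c in ascii_lowercase]

def nearest_language(references, text):
    """Label a text with the language of the nearest reference portrait."""
    dp = digital_portrait(text)
    return min(references, key=lambda lang: dist(references[lang], dp))

refs = {
    "en": digital_portrait("the quick brown fox jumps over the lazy dog"
                           " and then the dog sleeps"),
    "es": digital_portrait("el rapido zorro marron salta sobre el perro"
                           " perezoso y luego duerme"),
}
print(nearest_language(refs, "the dog jumps over the fox"))
```

With real corpora the reference portraits would be built from much longer texts, which is what makes the unigram frequencies stable enough for the reported accuracy.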


2013 ◽  
Vol 347-350 ◽  
pp. 2447-2451
Author(s):  
Guo He Li ◽  
Xiang Yue ◽  
Wei Jiang Wu ◽  
Jiang Hui Zhao

In order to set up a universal, non-linear mapping of variables, a full binary tree is constructed as the mathematical model. The leaf nodes of the full binary tree are linear combinations of the input variables and serve as inputs to the next nodes. After an inner node weights its two inputs by a selector, the inputs are again linearly combined and used as the output passed to the next node. The inputs and outputs of all the inner nodes are constructed in turn in the same way, and the output of the root node is the output of the mathematical model, implementing a piecewise-linear approximation. Using particle swarm optimization as the machine learning method on data from several areas, all the coefficients of the mathematical model are obtained for the specific case. The mathematical model is applied to seismic inversion to interpret strata from seismic data, showing that it is very practical.
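The tree evaluation described above can be sketched directly: leaves linearly combine the inputs, and each inner node linearly combines its two children up to the root. The coefficients here are fixed by hand for illustration, whereas the paper fits them with particle swarm optimization:

```python
# Full-binary-tree model: leaves are linear combinations of the input
# variables; each inner node linearly combines its two children; the
# root's value is the model output.
def evaluate(node, x):
    kind = node[0]
    if kind == "leaf":                       # ("leaf", weights, bias)
        _, w, b = node
        return sum(wi * xi for wi, xi in zip(w, x)) + b
    _, left, right, a, b = node              # ("inner", left, right, a, b)
    return a * evaluate(left, x) + b * evaluate(right, x)

tree = ("inner",
        ("leaf", [1.0, 0.0], 0.0),           # picks x[0]
        ("leaf", [0.0, 1.0], 1.0),           # x[1] + 1
        0.5, 0.5)                            # root averages the two leaves

print(evaluate(tree, [2.0, 4.0]))  # → 3.5
```

Fitting would then treat all the weights, biases, and combination coefficients as one parameter vector for the swarm to optimize against the training data.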


2013 ◽  
Vol 846-847 ◽  
pp. 1056-1059
Author(s):  
Peng Wu

This paper proposes a dimensionality reduction mathematical model based on feedback constraints for high-dimensional information. It uses a feedback restriction technique to construct a dimensionality reduction model for multidimensional product data. The data obtained are of high dimension, and a large number of the data are subject to standardized restrictions on the components involved. High-dimensional data participating in computation increase its complexity, and hence its dimension needs to be reduced. In this paper a multi-constrained inverse regression model is adopted to reduce the dimension of cloud resource scheduling data in multi-constrained environments. Experimental results show that the proposed method increases the data coverage rate of high-dimensional data mining by 66% and has a strong optimizing effect.


Author(s):  
Rohit A Nitnaware ◽  
Prof. Vijaya Kamble

In disease diagnosis, the recognition of patterns is essential for identifying the disease exactly. Machine learning is the field used for building models that can predict an output depending on the inputs, which are related to previously collected data. Disease identification is the most essential task in treating any disease. Classification algorithms are used for classifying the disease; several classification algorithms and dimensionality reduction algorithms are employed. Machine learning enables computers to learn without being modified externally. Using a classification algorithm, a hypothesis can be chosen, from the set of alternatives, that best fits a set of observations. Machine learning is used for high-dimensional and multi-dimensional data, and better, automated algorithms can be built with it.

