Knee cartilage segmentation from MR images using artificial intelligence

2021 ◽  
Vol 162 (9) ◽  
pp. 352-360
Author(s):  
Péter Szoldán ◽  
Zsófia Egyed ◽  
Endre Szabó ◽  
János Somogyi ◽  
György Hangody ◽  
...  

Summary. Introduction: The partial orthopedic reconstruction of the knee joint with an ultra-fresh osteochondral allograft requires precise planning based on medical imaging; an artificial intelligence capable of determining the morphology of the cartilage tissue can be of great help in such planning. Objective: We aimed to develop and train an artificial intelligence capable of determining the cartilage morphology in a knee joint based on an MR image.
Method: After having determined the most appropriate MR sequence for this project and having acquired 180 knee MR images, we created the training set for the artificial intelligence by manually and semi-automatically segmenting the contours of the cartilage in the images. We then trained the deep convolutional neural network with this dataset. Results: As a result of our work, the artificial intelligence is capable of determining the morphology of the cartilage tissue in the MR image to a level of accuracy sufficient for surgery planning; thus we have taken the first step towards machine-planned surgeries. Conclusion: The selected technology – artificial intelligence – seems capable of solving tasks related to cartilage geometry, creating a wide range of application opportunities in joint therapy. Orv Hetil. 2021; 162(9): 352–360.
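The abstract does not publish code, but segmentation quality of the kind reported here is conventionally scored with an overlap metric such as the Dice coefficient between the model's mask and the manual annotation. A minimal numpy sketch (toy masks, not the paper's data):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap between two binary masks (1.0 = perfect match)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 2-D "cartilage" masks: model output vs. manual annotation.
truth = np.zeros((8, 8), dtype=int)
truth[2:6, 2:6] = 1            # 16 annotated pixels
pred = np.zeros((8, 8), dtype=int)
pred[3:6, 2:6] = 1             # model misses the top row (12 pixels)
print(dice_coefficient(pred, truth))  # 2*12 / (12+16) ≈ 0.857
```

The same function applied slice-by-slice over an MR volume gives a per-slice accuracy profile against the manually segmented contours.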

Author(s):  
Siranush Sargsyan ◽  
Anna Hovakimyan

The study and application of neural networks is one of the main areas in the field of artificial intelligence. The effectiveness of a neural network depends significantly on both its architecture and the structure of the training set. This paper proposes a probabilistic approach to evaluating the effectiveness of a neural network when images intersect in the receptor field. A theorem and its corollaries are proved, which are consistent with results obtained by a different route for a perceptron-type neural network.
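For reference, the perceptron-type network the theorem addresses reduces, in its simplest form, to the classical perceptron learning rule. A self-contained toy sketch (synthetic, well-separated pattern classes, not the paper's receptor-field setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated pattern classes presented to a perceptron-type unit.
X = np.vstack([rng.normal( 2.0, 0.5, (20, 2)),
               rng.normal(-2.0, 0.5, (20, 2))])
y = np.array([1] * 20 + [-1] * 20)

w, b = np.zeros(2), 0.0
for _ in range(20):                          # epochs
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:           # misclassified -> update
            w += yi * xi
            b += yi

accuracy = np.mean(np.sign(X @ w + b) == y)
print(accuracy)
```

When the classes overlap in the receptor field (shrink the separation above), the rule no longer converges to zero error, which is exactly the regime the paper's probabilistic evaluation targets.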


BMJ Open ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. e046265
Author(s):  
Shotaro Doki ◽  
Shinichiro Sasahara ◽  
Daisuke Hori ◽  
Yuichi Oi ◽  
Tsukasa Takahashi ◽  
...  

Objectives: Psychological distress is a serious worldwide problem that needs to be addressed in the field of occupational health. This study aimed to use artificial intelligence (AI) to predict psychological distress among workers using sociodemographic, lifestyle and sleep factors, not subjective information such as mood and emotion, and to examine the performance of the AI models through a comparison with psychiatrists. Design: Cross-sectional study. Setting: We conducted a survey on psychological distress and living conditions among workers. An AI model for predicting psychological distress was created and then the results were compared in terms of accuracy with predictions made by psychiatrists. Participants: An AI model of the neural network and six psychiatrists. Primary outcome: The accuracies of the AI model and psychiatrists for predicting psychological distress. Methods: In total, data from 7251 workers were analysed to predict moderate and severe psychological distress. An AI model of the neural network was created and accuracy, sensitivity and specificity were calculated. Six psychiatrists used the same data as the AI model to predict psychological distress and conduct a comparison with the AI model. Results: The accuracies of the AI model and psychiatrists for predicting moderate psychological distress were 65.2% and 64.4%, respectively, showing no significant difference. The accuracies of the AI model and psychiatrists for predicting severe psychological distress were 89.9% and 85.5%, respectively, indicating that the AI model had significantly higher accuracy. Conclusions: A machine learning model was successfully developed to screen workers with depressed mood. The explanatory variables used for the predictions did not directly ask about mood. Therefore, this newly developed model appears to be able to predict psychological distress among workers easily, regardless of their subjective views.
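The three reported metrics come straight from a binary confusion matrix. A minimal sketch of their computation (toy labels, not the study's data; 1 = distressed, 0 = not distressed):

```python
import numpy as np

def screening_metrics(y_true, y_pred):
    """Accuracy, sensitivity and specificity for a binary screen."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return (tp + tn) / y_true.size, tp / (tp + fn), tn / (tn + fp)

# Illustrative toy labels only.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
acc, sens, spec = screening_metrics(y_true, y_pred)
print(acc, sens, spec)   # accuracy 0.8, sensitivity 0.75, specificity ≈0.83
```

Applying the same function to the AI model's and each psychiatrist's predictions on identical cases is what makes the head-to-head comparison well defined.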


2015 ◽  
Vol 770 ◽  
pp. 540-546 ◽  
Author(s):  
Yuri Eremenko ◽  
Dmitry Poleshchenko ◽  
Anton Glushchenko

The use of modern intelligent information-processing methods for evaluating the filling level of a ball mill is considered. For this purpose, the vibration acceleration signal was measured on a laboratory mill model with an accelerometer attached to a mill pin. It is concluded that the mill filling level cannot be determined from the amplitude of this signal alone, so its spectrum, processed by a neural network, is used instead. A training set for the neural network is formed with the help of spectral analysis methods. The trained neural network is able to find the correlation between the pin vibration acceleration signal and the mill filling level. A test set, formed from data not included in the training set, is used to evaluate the network's ability to estimate the filling degree. The neural network guarantees no more than 7% error in the evaluation of the mill filling level.
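A sketch of the spectral preprocessing step: band-averaged FFT magnitudes of the accelerometer signal become the network's input vector. The signals, sampling rate and band count below are invented stand-ins; the example only illustrates why two states of equal amplitude remain separable in the spectrum:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000                      # sampling rate, Hz (assumed)

def spectrum_features(signal, n_bands=8):
    """Average FFT magnitude in n_bands equal-width bands -> NN input."""
    mag = np.abs(np.fft.rfft(signal))
    return np.array([b.mean() for b in np.array_split(mag, n_bands)])

# Two synthetic mill states: same amplitude, different dominant frequency,
# illustrating why amplitude alone cannot separate filling levels.
t = np.arange(0, 1, 1 / fs)
low_fill = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.normal(size=t.size)
high_fill = np.sin(2 * np.pi * 200 * t) + 0.1 * rng.normal(size=t.size)

f_low, f_high = spectrum_features(low_fill), spectrum_features(high_fill)
print(np.argmax(f_low), np.argmax(f_high))   # energy sits in different bands
```

Feature vectors of this kind, paired with known filling levels, form the training set the abstract describes.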


Author(s):  
Chenyu Zhou ◽  
Liangyao Yu ◽  
Yong Li ◽  
Jian Song

Accurate estimation of sideslip angle is essential for vehicle stability control. For commercial vehicles, the estimation of sideslip angle is challenging due to severe load transfer and tire nonlinearity. This paper presents a robust sideslip angle observer of commercial vehicles based on identification of tire cornering stiffness. Since the tire cornering stiffness of commercial vehicles is greatly affected by tire force and the road adhesion coefficient, it cannot be treated as a constant. To estimate the cornering stiffness in real time, a neural network model constructed by the Levenberg-Marquardt backpropagation (LMBP) algorithm is employed. LMBP is a fast-convergent supervised learning algorithm, which combines the steepest descent method and the Gauss-Newton method, and is widely used in system parameter estimation. LMBP does not rely on a mathematical model of the actual system when building the neural network; therefore, it performs well when such a model is difficult to establish. Considering the complexity of tire modeling, this study adopted the LMBP algorithm to estimate tire cornering stiffness, which simplifies the tire model and improves the estimation accuracy. Combined with the neural network, a time-varying Kalman filter (TVKF) is designed to observe the sideslip angle of commercial vehicles. To validate the feasibility of the proposed estimation algorithm, multiple driving maneuvers under different road surface friction conditions were carried out. The test results show that the proposed method is more accurate than the existing algorithm and robust over a wide range of driving conditions.
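The defining feature of a time-varying Kalman filter is that the system matrices are rebuilt every step, here from the latest stiffness estimate. A minimal scalar sketch (the true observer uses a full vehicle model; the drifting measurement gain below is a hypothetical stand-in for the stiffness dependence):

```python
import numpy as np

rng = np.random.default_rng(0)

def tvkf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle with step-dependent F and H."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)          # time-varying gain
    x = x_pred + K @ (z - H @ x_pred)
    P = (np.eye(x.size) - K @ H) @ P_pred
    return x, P

# Track a constant hidden quantity (stand-in for sideslip angle) through
# a measurement gain that drifts like a varying cornering stiffness.
truth = 2.0
x, P = np.zeros(1), np.eye(1)
F, Q, R = np.eye(1), np.eye(1) * 1e-4, np.eye(1) * 0.01
for k in range(200):
    H = np.array([[1.0 + 0.5 * np.sin(k / 10)]])   # H_k changes each step
    z = H @ np.array([truth]) + rng.normal(0, 0.1, 1)
    x, P = tvkf_step(x, P, z, F, H, Q, R)
print(x[0])   # close to 2.0
```

In the paper's observer, the role of `H` (and `F`) is played by matrices assembled from the neural network's cornering-stiffness output at each sampling instant.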


The objective of this work is to apply neural networks to phishing email detection and to assess the effectiveness of this approach. We design the feature set, process the phishing dataset, and implement the neural network systems. We then compare their performance against that of other established machine learning techniques: decision trees (DT), k-nearest neighbours (KNN), naive Bayes (NB) and support vector machines (SVM). The same dataset and feature set are used in the comparison. From the statistical analysis, we conclude that neural networks with an appropriate number of hidden units can achieve satisfactory accuracy even when training examples are scarce. Moreover, our feature selection is effective in capturing the characteristics of phishing emails, as most machine learning algorithms yield reasonable results with it.
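A minimal sketch of such a classifier: a one-hidden-layer network trained by full-batch gradient descent on binary email features. The feature names and labelling rule are invented for illustration and are not the paper's feature set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binary features per email, e.g. "link text mismatches its
# URL", "urgent wording", "sender unknown" (not the paper's actual set).
X = rng.integers(0, 2, (200, 3)).astype(float)
y = (X.sum(axis=1) >= 2).astype(float)       # toy labelling rule

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W1, b1 = rng.normal(0, 1, (3, 4)), np.zeros(4)   # 4 hidden units
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
lr = 1.0

for _ in range(2000):                        # full-batch gradient descent
    h = sigmoid(X @ W1 + b1)                 # hidden layer
    p = sigmoid(h @ W2 + b2).ravel()         # phishing probability
    g = (p - y) / len(y)                     # log-loss gradient wrt logit
    gh = (g[:, None] * W2.T) * h * (1 - h)   # backprop into hidden layer
    W2 -= lr * (h.T @ g[:, None]); b2 -= lr * g.sum()
    W1 -= lr * (X.T @ gh);         b1 -= lr * gh.sum(axis=0)

accuracy = np.mean((p > 0.5) == (y == 1.0))
print(accuracy)
```

Swapping this model for DT, KNN, NB or SVM while holding `X` and `y` fixed is the comparison protocol the abstract describes.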


Author(s):  
Meghna Babubhai Patel ◽  
Jagruti N. Patel ◽  
Upasana M. Bhilota

An ANN can work the way the human brain works and can learn the way we learn. A neural network is not an algorithm; it is a network with weights on it, and the weights can be adjusted so that it learns. You teach it through trials. A neural network can operate and improve its performance after being "taught", but it must first undergo a process of learning to acquire information and become familiar with it. Nowadays, the age of smart devices dominates the technological world, and no one can deny their great value and contributions to mankind. A dramatic rise in platforms, tools, and applications based on machine learning and artificial intelligence has been seen. These technologies have impacted not only the software and internet industries but also other verticals such as healthcare, legal, manufacturing, automotive, and agriculture. The chapter shows the importance of the latest technology used in ANNs and future trends in ANNs.


2019 ◽  
Vol 11 (19) ◽  
pp. 2191 ◽  
Author(s):  
Encarni Medina-Lopez ◽  
Leonardo Ureña-Fuentes

The aim of this work is to obtain high-resolution values of sea surface salinity (SSS) and temperature (SST) in the global ocean by using raw satellite data (i.e., without any band data pre-processing or atmospheric correction). Sentinel-2 Level 1-C Top of Atmosphere (TOA) reflectance data is used to obtain accurate SSS and SST information. A deep neural network is built to link the band information with in situ data from different buoys, vessels, drifters, and other platforms around the world. The neural network used in this paper includes shortcuts, providing an improved performance compared with the equivalent feed-forward architecture. The in situ information used as input for the network has been obtained from the Copernicus Marine In situ Service. Sentinel-2 platform-centred band data has been processed using Google Earth Engine in areas of 100 m × 100 m. Accurate salinity values are estimated for the first time independently of temperature. Salinity results rely only on direct satellite observations, although they present a clear dependency on temperature ranges. Results show the neural network has good interpolation and extrapolation capabilities. Test results present correlation coefficients of 82% and 84% for salinity and temperature, respectively. The most common error is 0.4 °C for SST and 0.4 PSU for SSS. The sensitivity analysis shows that outliers are present in areas where the number of observations is very low. The network is finally applied over a complete Sentinel-2 tile, presenting sensible patterns for river-sea interaction, as well as seasonal variations. The methodology presented here is relevant for detailed coastal and oceanographic applications, reducing the time for data pre-processing, and it is applicable to a wide range of satellites, as the information is directly obtained from TOA data.
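The shortcut connections the abstract credits for the improvement are additive skips: a layer's input is re-injected further down the stack. A forward-pass sketch in numpy (layer sizes and weights are illustrative placeholders, not the fitted SSS/SST model; 13 is the Sentinel-2 MSI band count):

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, W, b):
    """Fully connected layer with ReLU activation."""
    return np.maximum(0.0, x @ W + b)

x = rng.normal(size=(5, 13))               # 5 samples x 13 TOA bands
W1, b1 = rng.normal(0, 0.1, (13, 13)), np.zeros(13)
W2, b2 = rng.normal(0, 0.1, (13, 13)), np.zeros(13)

h = dense(x, W1, b1)
out_plain = dense(h, W2, b2)               # plain feed-forward stack
out_short = dense(h + x, W2, b2)           # shortcut: input re-enters
print(out_plain.shape, out_short.shape)
```

The shortcut gives later layers direct access to the raw band information, which tends to ease optimization relative to the equivalent plain stack.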


2008 ◽  
Vol 19 (02) ◽  
pp. 205-213 ◽  
Author(s):  
AMR RADI

A genetic algorithm (GA) has been used to find the optimal neural network (NN) solution (i.e., a hybrid technique) that represents the dispersion formula of an optical fiber. An efficient NN has been designed by the GA to simulate the dynamics of the optical fiber system, which is nonlinear. Without any prior knowledge of the system, we used the input and output data to build a prediction model with the NN. The neural network has been trained to produce a function describing the nonlinear dependence of the refractive index of the fiber core on wavelength and temperature. The trained NN model shows good performance in matching the training distributions. The NN is then used to predict refractive indices not present in the training set; the predicted refractive index matched the experimental data effectively.
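The GA-over-NN idea can be sketched with a toy search over a single design variable. Here the "fitness" is a hypothetical noisy validation error with a known optimum at 12 hidden units; the paper's GA evolves real network designs against fiber data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in validation error: best architecture is 12 hidden units.
def fitness(n_hidden):
    return (n_hidden - 12) ** 2 + rng.normal(0.0, 0.5)

pop = rng.integers(2, 33, size=10)                  # initial architectures
for _ in range(30):                                 # generations
    scores = np.array([fitness(n) for n in pop])
    parents = pop[np.argsort(scores)[:5]]           # selection: best half
    children = np.clip(parents + rng.integers(-2, 3, size=5), 2, 32)
    pop = np.concatenate([parents, children])       # next generation

best = min(pop, key=lambda n: (n - 12) ** 2)        # noise-free check
print(best)
```

Real implementations also encode layer counts, activations and learning rates in the genome and use crossover alongside mutation; the selection-mutation loop above is the core of the method.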


2019 ◽  
Vol 85 (6) ◽  
Author(s):  
L. Hesslow ◽  
L. Unnerfelt ◽  
O. Vallhagen ◽  
O. Embreus ◽  
M. Hoppe ◽  
...  

Integrated modelling of electron runaway requires computationally expensive kinetic models that are self-consistently coupled to the evolution of the background plasma parameters. The computational expense can be reduced by using parameterized runaway generation rates rather than solving the full kinetic problem. However, currently available generation rates neglect several important effects; in particular, they are not valid in the presence of partially ionized impurities. In this work, we construct a multilayer neural network for the Dreicer runaway generation rate which is trained on data obtained from kinetic simulations performed for a wide range of plasma parameters and impurities. The neural network accurately reproduces the Dreicer runaway generation rate obtained by the kinetic solver. By implementing it in a fluid runaway-electron modelling tool, we show that the improved generation rates lead to significant differences in the self-consistent runaway dynamics as compared to the results using the previously available formulas for the runaway generation rate.
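The surrogate idea, stripped to its essentials: a cheap fitted model, trained in log space because the rates span many orders of magnitude, replaces the expensive kinetic solver inside the fluid code. The stand-in "solver" below is a schematic exponential rate, not the real Dreicer physics, and `np.polyfit` stands in for the paper's multilayer neural network:

```python
import numpy as np

def kinetic_solver(E):
    """Stand-in for an expensive kinetic calculation of a runaway rate."""
    return np.exp(-1.0 / E)            # schematic exp(-const/E) scaling

# Build the training table once, offline.
E_train = np.linspace(0.05, 1.0, 40)
log_rate = np.log(kinetic_solver(E_train))          # fit in log space
coeffs = np.polyfit(1.0 / E_train, log_rate, 1)     # cheap surrogate

surrogate = lambda E: np.exp(np.polyval(coeffs, 1.0 / E))

# Evaluate the surrogate where the solver was never queried.
E_test = np.array([0.07, 0.3, 0.8])
rel_err = np.abs(surrogate(E_test) - kinetic_solver(E_test)) \
          / kinetic_solver(E_test)
print(rel_err.max())
```

The paper's contribution is exactly this substitution at scale: a neural network trained over many plasma parameters and impurity compositions, accurate enough that the fluid code's self-consistent runaway dynamics change relative to the older parameterized formulas.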


2013 ◽  
Vol 641-642 ◽  
pp. 460-463
Author(s):  
Yong Gang Liu ◽  
Xin Tian ◽  
Yue Qiang Jiang ◽  
Gong Bing Li ◽  
Yi Zhou Li

In this study, a three-layer artificial neural network (ANN) model was constructed to predict the detonation pressure of aluminized explosives. Elemental composition and loading density were employed as input descriptors and detonation pressure was used as output. The dataset of 41 aluminized explosives was randomly divided into a training set (30) and a prediction set (11). After optimization by adjusting various parameters, the optimal configuration of the neural network was obtained. Detonation pressures calculated with the final optimal network [6-9-1] show good agreement with experimental results. It is shown here that an ANN is able to produce accurate predictions of the detonation pressure of aluminized explosives.
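The reported [6-9-1] topology means 6 input descriptors, 9 hidden units and 1 output. A structural sketch of that network and the 30/11 split (weights and data are random placeholders, not the fitted model or the real explosives dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

# [6-9-1]: 6 descriptors (elemental composition + loading density),
# 9 hidden units, 1 output (detonation pressure).  Random placeholders.
W1, b1 = rng.normal(size=(6, 9)), np.zeros(9)
W2, b2 = rng.normal(size=(9, 1)), np.zeros(1)

def predict(x):
    h = np.tanh(x @ W1 + b1)        # hidden layer
    return h @ W2 + b2              # linear output for regression

X = rng.normal(size=(41, 6))        # 41 explosives in the dataset
train, test = X[:30], X[30:]        # 30/11 split as in the paper
print(predict(train).shape, predict(test).shape)
```

Training would fit `W1, b1, W2, b2` against measured pressures on the 30 training compounds, with the 11 held-out compounds gauging predictive accuracy.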

