Automated Measurement of Lumbar Lordosis on Radiographs Using Machine Learning and Computer Vision

2019
Vol 10 (5)
pp. 611-618
Author(s):
Brian H. Cho
Deepak Kaji
Zoe B. Cheung
Ivan B. Ye
Ray Tang
...  

Study Design: Cross-sectional database study. Objective: To develop a fully automated artificial intelligence and computer vision pipeline for assisted evaluation of lumbar lordosis. Methods: Lateral lumbar radiographs were used to develop a segmentation neural network (n = 629). After synthetic augmentation, 70% of these radiographs were used for network training, while the remaining 30% were used for hyperparameter optimization. A computer vision algorithm was deployed on the segmented radiographs to calculate lumbar lordosis angles. A test set of radiographs was used to evaluate the validity of the entire pipeline (n = 151). Results: The U-Net segmentation achieved a test dataset Dice score of 0.821, an area under the receiver operating characteristic curve of 0.914, and an accuracy of 0.862. The computer vision algorithm identified the L1 and S1 vertebrae on 84.1% of the test set with an average speed of 0.14 seconds/radiograph. From the 151 test set radiographs, 50 were randomly chosen for surgeon measurement. When compared with those measurements, our algorithm achieved a mean absolute error of 8.055° and a median absolute error of 6.965° (differences not statistically significant, P > .05). Conclusion: This study is the first to use artificial intelligence and computer vision in a combined pipeline to rapidly measure a sagittal spinopelvic parameter without prior manual surgeon input. The pipeline measures angles with no statistically significant differences from manual measurements by surgeons. This pipeline offers clinical utility in an assistive capacity, and future work should focus on improving segmentation network performance.
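As a rough illustration of the geometry such a pipeline must compute, the lordosis angle can be measured Cobb-style as the angle between the superior endplate of L1 and the superior endplate of S1 once segmentation has localized those landmarks. The sketch below is not the authors' pipeline; the function names and landmark coordinates are hypothetical, and image y-coordinates are assumed to increase downward.

```python
import math

def endplate_angle_deg(p1, p2):
    """Orientation of the line through two endplate landmark points, in degrees."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def lumbar_lordosis_angle(l1_sup, s1_sup):
    """Cobb-style lordosis angle: the absolute angle between the
    L1 superior endplate line and the S1 superior endplate line."""
    diff = abs(endplate_angle_deg(*l1_sup) - endplate_angle_deg(*s1_sup)) % 360
    return min(diff, 360 - diff)

# Hypothetical endplate landmarks (x, y) in image-pixel coordinates:
l1 = ((100, 50), (160, 40))    # L1 superior endplate, tilted slightly up
s1 = ((110, 300), (170, 330))  # S1 superior endplate, tilted down
print(round(lumbar_lordosis_angle(l1, s1), 1))  # → 36.0
```

In practice the endplate landmarks would be extracted from the segmentation mask (e.g., corner points of the vertebral body contours) before this angle step runs.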

2020
Vol 96 (3s)
pp. 585-588
Author(s):
С.Е. Фролова
Е.С. Янакова

Methods are proposed for building prototyping platforms for high-performance systems-on-chip (SoC) for artificial intelligence tasks. The requirements for platforms of this class and the principles for modifying an SoC design for implementation in a prototype are described, along with methods of debugging designs on the prototyping platform. Results are presented for computer vision algorithms using neural network technologies running on the FPGA prototype of the ELcore semantic cores.


Author(s):  
Concepción De‐Hita‐Cantalejo ◽  
Ángel García‐Pérez ◽  
José‐María Sánchez‐González ◽  
Raúl Capote‐Puente ◽  
María Carmen Sánchez‐González

BMJ Open
2021
Vol 11 (6)
pp. e046265
Author(s):
Shotaro Doki
Shinichiro Sasahara
Daisuke Hori
Yuichi Oi
Tsukasa Takahashi
...  

Objectives: Psychological distress is a worldwide and serious problem that needs to be addressed in the field of occupational health. This study aimed to use artificial intelligence (AI) to predict psychological distress among workers using sociodemographic, lifestyle and sleep factors, not subjective information such as mood and emotion, and to examine the performance of the AI models through a comparison with psychiatrists. Design: Cross-sectional study. Setting: We conducted a survey on psychological distress and living conditions among workers. An AI model for predicting psychological distress was created, and the results were then compared in terms of accuracy with predictions made by psychiatrists. Participants: An AI model of the neural network and six psychiatrists. Primary outcome: The accuracies of the AI model and psychiatrists for predicting psychological distress. Methods: In total, data from 7251 workers were analysed to predict moderate and severe psychological distress. An AI model of the neural network was created, and accuracy, sensitivity and specificity were calculated. Six psychiatrists used the same data as the AI model to predict psychological distress and conduct a comparison with the AI model. Results: The accuracies of the AI model and psychiatrists for predicting moderate psychological distress were 65.2% and 64.4%, respectively, showing no significant difference. The accuracies of the AI model and psychiatrists for predicting severe psychological distress were 89.9% and 85.5%, respectively, indicating that the AI model had significantly higher accuracy. Conclusions: A machine learning model was successfully developed to screen workers with depressed mood. The explanatory variables used for the predictions did not directly ask about mood. Therefore, this newly developed model appears to be able to predict psychological distress among workers easily, regardless of their subjective views.
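The accuracy, sensitivity and specificity reported for such a screening model all derive from the confusion matrix. A minimal sketch, with illustrative counts that are not taken from the study:

```python
def screening_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)  # true-positive rate: distressed workers caught
    specificity = tn / (tn + fp)  # true-negative rate: healthy workers cleared
    return accuracy, sensitivity, specificity

# Illustrative counts only (not from the study):
acc, sens, spec = screening_metrics(tp=60, fp=25, tn=140, fn=15)
print(f"accuracy={acc:.3f} sensitivity={sens:.3f} specificity={spec:.3f}")
# → accuracy=0.833 sensitivity=0.800 specificity=0.848
```

For a screening application, sensitivity is usually weighted more heavily than accuracy, since a missed case of severe distress is costlier than a false alarm.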


2020
pp. 000370282097751
Author(s):
Xin Wang
Xia Chen

Many spectra have a polynomial-like baseline. Iterative polynomial fitting (IPF) is one of the most popular methods for baseline correction of these spectra. However, the baseline estimated by IPF may have substantial error when the spectrum contains very strong peaks or has strong peaks located at the endpoints. First, IPF uses a temporary baseline estimated from the current spectrum to identify peak data points. If the current spectrum contains strong peaks, then the temporary baseline deviates substantially from the true baseline. Some good baseline data points of the spectrum might be mistakenly identified as peak data points and artificially re-assigned a low value. Second, if a strong peak is located at an endpoint of the spectrum, then the endpoint region of the estimated baseline might have significant error due to overfitting. This study proposes a search algorithm-based baseline correction method (SA) that compresses the raw spectrum into a dataset with a small number of sampled data points and then converts peak removal into a search problem, in the artificial intelligence (AI) sense, of minimizing an objective function by deleting peak data points. First, the raw spectrum is smoothed by the moving average method to reduce noise and then divided into dozens of unequally spaced sections on the basis of Chebyshev nodes. Finally, the minimal points of each section are collected to form a dataset for peak removal through the search algorithm. SA selects the mean absolute error (MAE) as the objective function because of its sensitivity to overfitting and rapid calculation. The baseline correction performance of SA is compared with those of three baseline correction methods: the Lieber and Mahadevan–Jansen method, the adaptive iteratively reweighted penalized least squares method, and the improved asymmetric least squares method. Simulated and real FTIR and Raman spectra with polynomial-like baselines are employed in the experiments. Results show that for these spectra, the baseline estimated by SA has less error than those estimated by the three other methods.
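A minimal sketch of the compression-and-search idea described above, with hypothetical function names and parameter choices; the paper's exact smoothing window, section count, polynomial degree, and search strategy may differ, and the greedy deletion loop here is only one way to run the search:

```python
import numpy as np

def chebyshev_edges(n_sections, lo, hi):
    """Section boundaries spaced like Chebyshev nodes (denser near endpoints)."""
    k = np.arange(n_sections + 1)
    nodes = np.cos(np.pi * k / n_sections)      # runs from 1 down to -1
    return lo + (hi - lo) * (1 - nodes) / 2     # maps to lo .. hi, increasing

def compress_spectrum(x, y, n_sections=30, window=5):
    """Smooth with a moving average, then keep each section's minimum point."""
    y_s = np.convolve(y, np.ones(window) / window, mode="same")
    edges = chebyshev_edges(n_sections, x[0], x[-1])
    xs, ys = [], []
    for a, b in zip(edges[:-1], edges[1:]):
        mask = (x >= a) & (x <= b)
        if mask.any():
            i = np.argmin(y_s[mask])
            xs.append(x[mask][i])
            ys.append(y_s[mask][i])
    return np.array(xs), np.array(ys)

def baseline_mae(xs, ys, degree=4):
    """Fit a polynomial baseline to the compressed points; the MAE of the
    fit is the objective the search minimizes."""
    coeffs = np.polyfit(xs, ys, degree)
    return np.mean(np.abs(np.polyval(coeffs, xs) - ys))

def remove_peaks(xs, ys, degree=4):
    """Greedy search: repeatedly delete the point whose removal reduces
    the MAE objective, until no single deletion improves it."""
    xs, ys = list(xs), list(ys)
    improved = True
    while improved and len(xs) > degree + 2:
        improved = False
        base = baseline_mae(np.array(xs), np.array(ys), degree)
        for i in range(len(xs)):
            tx, ty = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
            if baseline_mae(np.array(tx), np.array(ty), degree) < base:
                del xs[i]
                del ys[i]
                improved = True
                break
    return np.array(xs), np.array(ys)
```

Because the Chebyshev spacing places more sections near the spectrum's endpoints, the compressed dataset retains extra baseline support exactly where IPF tends to overfit.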


Diagnostics
2021
Vol 11 (2)
pp. 206
Author(s):
Matteo Giulietti
Monia Cecati
Berina Sabanovic
Andrea Scirè
Alessia Cimadamore
...  

The increasing availability of molecular data provided by next-generation sequencing (NGS) techniques is allowing improvement in the possibilities of diagnosis and prognosis in renal cancer. Reliable and accurate predictors based on selected gene panels are urgently needed for better stratification of renal cell carcinoma (RCC) patients in order to define a personalized treatment plan. Artificial intelligence (AI) algorithms are currently in development for this purpose. Here, we reviewed studies that developed predictors based on AI algorithms for diagnosis and prognosis in renal cancer, and we compared them with non-AI-based predictors. Comparing study results, it emerges that AI prediction performance is good and slightly better than that of non-AI-based predictors. However, there have been only minor improvements in AI predictors in terms of accuracy and the area under the receiver operating characteristic curve (AUC) over the last decade, and the number of genes used has had little influence on these indices. Furthermore, we highlight that different studies with the same goal obtain similar performance despite using different discriminating genes. This is surprising because genes related to diagnosis or prognosis are expected to be tumor-specific and independent of selection methods and algorithms. The performance of these predictors will improve with better learning methods, as the number of cases increases, and by using different types of input data (e.g., non-coding RNAs, proteomic and metabolic data). This will allow for more precise identification, classification and staging of cancerous lesions that will be less affected by interpathologist variability.
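The AUC figures such reviews compare have a simple probabilistic reading: the chance that a randomly chosen positive case receives a higher predictor score than a randomly chosen negative case. A minimal sketch using the Mann-Whitney formulation, with made-up scores that stand in for any gene-panel predictor's output:

```python
def auc_from_scores(pos_scores, neg_scores):
    """AUC as the probability that a random positive case outranks a
    random negative case (Mann-Whitney U formulation); ties count half."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Hypothetical predictor scores for tumour vs. normal samples:
print(round(auc_from_scores([0.9, 0.8, 0.6], [0.7, 0.4, 0.3]), 3))  # → 0.889
```

This pairwise definition also makes the review's observation plausible: quite different gene panels can rank most tumour/normal pairs the same way and so land on similar AUCs.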


2021
Vol 4 (1)
Author(s):
Andre Esteva
Katherine Chou
Serena Yeung
Nikhil Naik
Ali Madani
...  

A decade of unprecedented progress in artificial intelligence (AI) has demonstrated the potential for many fields, including medicine, to benefit from the insights that AI techniques can extract from data. Here we survey recent progress in the development of modern computer vision techniques, powered by deep learning, for medical applications, focusing on medical imaging, medical video, and clinical deployment. We start by briefly summarizing a decade of progress in convolutional neural networks, including the vision tasks they enable, in the context of healthcare. Next, we discuss several example medical imaging applications that stand to benefit, including cardiology, pathology, dermatology, and ophthalmology, and propose new avenues for continued work. We then expand into general medical video, highlighting ways in which clinical workflows can integrate computer vision to enhance care. Finally, we discuss the challenges and hurdles required for real-world clinical deployment of these technologies.


Author(s):  
Hernan Chinsk ◽  
Ricardo Lerch ◽  
Damián Tournour ◽  
Luis Chinski ◽  
Diego Caruso

During rhinoplasty consultations, surgeons typically create a computer simulation of the expected result. An artificial intelligence model (AIM) can learn a surgeon's style and criteria and generate the simulation automatically. The objective of this study is to determine whether an AIM is capable of imitating a surgeon's criteria to generate simulated images of an aesthetic rhinoplasty surgery. This is a cross-sectional survey study of resident and specialist doctors in otolaryngology conducted in November 2019 during a rhinoplasty conference. Sequential images of rhinoplasty simulations created by a surgeon and by an AIM were shown at random. Participants used a seven-point Likert scale to evaluate their level of agreement with the simulation images they were shown, with 1 indicating total disagreement and 7 total agreement. Ninety-seven of 122 doctors agreed to participate in the survey. The median level of agreement between the participant and the surgeon was 6 (interquartile range or IQR 5–7); between the participant and the AIM it was 5 (IQR 4–6), p-value < 0.0001. The evaluators were in total or partial agreement with the results of the AIM's simulation 68.4% of the time (95% confidence interval or CI 64.9–71.7). They were in total or partial agreement with the surgeon's simulation 77.3% of the time (95% CI 74.2–80.3). An AIM can emulate a surgeon's aesthetic criteria to generate a computer-simulated image of rhinoplasty. This can allow patients to have a realistic approximation of the possible results of a rhinoplasty ahead of an in-person consultation. The level of evidence of the study is 4.
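The summary statistics reported here (median with IQR for Likert ratings, proportions with 95% CIs for agreement) follow standard formulas. A minimal sketch using the normal-approximation interval, which may differ from the method the study actually used; all numbers below are illustrative, not the study's data:

```python
import math
import statistics

def proportion_ci(successes, trials, z=1.96):
    """Agreement proportion with a normal-approximation 95% CI."""
    p = successes / trials
    half = z * math.sqrt(p * (1 - p) / trials)
    return p, max(0.0, p - half), min(1.0, p + half)

def median_iqr(ratings):
    """Median and interquartile range (Q1, Q3) of Likert ratings."""
    q1, _, q3 = statistics.quantiles(ratings, n=4)
    return statistics.median(ratings), (q1, q3)

# Illustrative values only:
p, lo, hi = proportion_ci(68, 100)     # 68 of 100 ratings agree
m, (q1, q3) = median_iqr([5, 6, 7, 6, 5, 4, 7, 6])
print(p, lo, hi, m, q1, q3)
```

Note that ratings from the same evaluator are correlated, so a study would typically need an interval method that accounts for clustering rather than this simple independent-trials approximation.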

