DEGENERATIVE DISC SEGMENTATION AND DIAGNOSIS TECHNOLOGY USING IMPORTANT FEATURES FROM MRI OF SPINE IN IMAGES

2014 ◽  
Vol 26 (04) ◽  
pp. 1440008
Author(s):  
Ming-Chi Wu ◽  
Yu-Liang Kuo ◽  
Chen-Wei Chen ◽  
Cheng-An Fang ◽  
Chiun-Li Chin ◽  
...  

In this paper, we focus on medical image segmentation techniques used in the study of spine diseases. Medical reports show that disc degeneration is among the spine conditions of greatest concern to the general public. Because of the complex composition of the spine, which includes bone, cartilage, fat, water, and soft tissue, it is difficult to locate each piece of cartilage in spine images correctly and easily; attempts often result in over-segmentation or an inability to extract the cartilage at all. We therefore propose an accurate, automated method to detect abnormal discs. We combine two standard models with a threshold value to identify the cartilage accurately. During processing, we also remove noise from the spine image using morphological methods, eliminate non-cartilage areas using our proposed method, and compute the average height of the cartilages, from which we can readily determine whether a disc is degenerated. In the experiments, the segmentation accuracy of the extracted region is evaluated by two criteria. The first is a set of statistical image-segmentation indices computed against a professional physician's manual segmentation; the results show that our method is easy to implement and highly accurate, with the highest rate reaching 99.88%. The second is a comparative index evaluated on our proposed system and an existing system, which shows that our proposed system outperforms the existing one.
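As an illustration of the morphological noise-removal step described above, here is a minimal sketch in Python with NumPy. The 3x3 structuring element, the threshold value, and the wrap-around border handling of `np.roll` are simplifying assumptions, not the paper's actual parameters.

```python
import numpy as np

def binary_erode(mask):
    """3x3 erosion: a pixel survives only if it and all 8 neighbours are set.
    np.roll wraps at the borders, which is acceptable for this sketch."""
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def binary_dilate(mask):
    """3x3 dilation: a pixel is set if it or any of its 8 neighbours is set."""
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def segment_discs(image, threshold):
    """Threshold, then morphologically open (erode + dilate) to
    suppress small, isolated noise regions."""
    mask = image > threshold
    return binary_dilate(binary_erode(mask))
```

Opening removes isolated bright pixels while preserving larger connected regions such as candidate cartilage areas; a real pipeline would follow this with the paper's non-cartilage removal and height measurement.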

Author(s):  
Qiu-Xia Hu ◽  
Jie Tian ◽  
Dong-Jian He

To improve the segmentation accuracy of plant lesion images, a multi-channel segmentation algorithm for plant disease images was proposed based on linear discriminant analysis (LDA) mapping and K-means clustering. First, six color channels were obtained from the RGB and HSV models, and the six channel values of all pixels were laid out as six columns. One channel was treated as the label and the others as sample features; these data were grouped for linear discriminant analysis, and the mapping values of the other five channels were projected onto the eigenvector space corresponding to the three largest eigenvalues. Second, the mapped values were used as the input to K-means, with the points of minimum and maximum pixel value taken as the initial cluster centers, which removes the randomness of initial-center selection in K-means. The segmented pixels were assigned to background and foreground, so the proposed segmentation reduces to two-class clustering of background versus foreground. Finally, experiments showed that the segmentation produced by the proposed LDA mapping-based method is better than that of the K-means, ExR, and CIVE methods.
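The deterministic K-means initialisation described above (minimum and maximum values as the starting centres) can be sketched as follows. This is a one-dimensional toy version on assumed data; the LDA mapping step that precedes clustering in the paper is omitted.

```python
import numpy as np

def two_class_kmeans(values, n_iter=20):
    """Two-class K-means on scalar values. The centres start at the
    minimum and maximum values, so the result is deterministic: no
    random initialisation is involved."""
    v = np.asarray(values, dtype=float).ravel()
    centres = np.array([v.min(), v.max()])
    labels = np.zeros(v.size, dtype=int)
    for _ in range(n_iter):
        # Assign each value to the nearer centre (0 = background-like,
        # 1 = foreground-like in the paper's two-class setting).
        labels = (np.abs(v - centres[0]) > np.abs(v - centres[1])).astype(int)
        for k in (0, 1):
            if np.any(labels == k):
                centres[k] = v[labels == k].mean()
    return labels, centres
```

Because the initial centres are the data extremes rather than random picks, repeated runs on the same mapped pixel values always give the same background/foreground split.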


2019 ◽  
Vol 12 (1) ◽  
pp. 5-10 ◽  
Author(s):  
Sivagnanam Rajamanickam Mani Sekhar ◽  
Siddesh Gaddadevara Matt ◽  
Sunilkumar S. Manvi ◽  
Srinivasa Krishnarajanagar Gopalalyengar

Background: Essential proteins are significant for drug design, cell development, and the survival of living organisms. Various methods have been developed to predict essential proteins using topological and biological features. Objective: It remains challenging to predict essential proteins effectively and in a timely manner, as the usefulness of protein-protein interaction data depends on the correctness of the network. Methods: In the proposed solution, two approaches, Mean Weighted Average and Recursive Feature Elimination, are used to predict essential proteins and compared to select the better one. In Mean Weighted Average, consecutive slots of data are aggregated to obtain the nearest value, which is taken as the prediction of the best proteins for the slot, whereas in Recursive Feature Elimination the whole dataset is split into slots and the essential proteins of each slot are determined. Results: The results show that the accuracy of Recursive Feature Elimination is at least nine percentage points higher than that of Mean Weighted Average and betweenness centrality. Conclusion: Essential proteins are encoded by genes that are essential for the survival of living organisms and for drug design. Different experimental and computational approaches have been proposed to identify them, and the experimental results show that the proposed work performs better than the other approaches.
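A minimal sketch of recursive feature elimination, the second approach above, using scikit-learn's `RFE`. The features here are randomly generated stand-ins, not the paper's topological or biological protein features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for protein feature vectors: 10 features of which
# 3 are informative (the paper's features are topological/biological).
X, y = make_classification(n_samples=200, n_features=10, n_informative=3,
                           n_redundant=0, random_state=0)

# RFE fits the estimator, drops the weakest feature by coefficient
# magnitude, and repeats until the requested number remain.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=3)
selector.fit(X, y)
print(selector.support_)   # boolean mask of the retained features
```

The `ranking_` attribute records the elimination order, which is one way to read off how strongly each feature contributed to the classification.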


2021 ◽  
Vol 13 (12) ◽  
pp. 2259
Author(s):  
Ruicheng Zhang ◽  
Chengfa Gao ◽  
Qing Zhao ◽  
Zihan Peng ◽  
Rui Shang

Multipath is a major error source in bridge deformation monitoring and the key obstacle to achieving millimeter-level monitoring. Although the traditional MHM (multipath hemispherical map) algorithm can mitigate multipath in real-time scenarios, its accuracy needs to be improved further because of observation noise and the multipath differences between satellites. To address the weakness of MHM in handling observation noise, we propose the MHM_V model, based on Variational Mode Decomposition (VMD) and the MHM algorithm. The VMD algorithm is used to extract the multipath from single-difference (SD) residuals, and, following the principle of the closest elevation and azimuth, the original carrier-phase observations on the following days are corrected to mitigate the influence of the multipath. The proposed MHM_V model is verified and compared with the traditional MHM algorithm using seven days of data from the Forth Road Bridge at a 10 s sampling rate. The results show that the correlation coefficient of the multipath on two adjacent days increases by about 10% after residual denoising with the VMD algorithm; the standard deviations of the residual error on the L1/L2 frequencies improve by 37.8% and 40.7%, respectively, better than the 26.1% and 31.0% achieved by the MHM algorithm. Taking a ratio of three as the threshold value, the ambiguity fixing success rate is 88.0% without multipath mitigation and 99.4% after mitigating the multipath with MHM_V. The MHM_V algorithm can effectively improve the success rate, reliability, and convergence rate of ambiguity resolution in a bridge multipath environment and performs better than the MHM algorithm.
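The "closest elevation and azimuth" correction at the heart of an MHM-style model can be sketched as a hemispherical grid lookup. This is an illustrative reduction: the VMD denoising step and the day-to-day sidereal alignment are omitted, and the 1° cell size is an assumption.

```python
import numpy as np

def build_mhm(elev, azim, residuals, cell=1.0):
    """Average SD residuals into an elevation x azimuth grid (degrees).
    Each cell stores the mean multipath value observed in it."""
    grid = {}
    for e, a, r in zip(elev, azim, residuals):
        key = (int(e // cell), int(a // cell))
        grid.setdefault(key, []).append(r)
    return {k: float(np.mean(v)) for k, v in grid.items()}

def correct(obs, elev, azim, grid, cell=1.0):
    """Correct one observation by subtracting the map value of the
    cell closest to its elevation/azimuth (0 if the cell is unseen)."""
    key = (int(elev // cell), int(azim // cell))
    return obs - grid.get(key, 0.0)
```

Because site multipath repeats with satellite geometry, a map built from one day's residuals can correct observations on following days; MHM_V's contribution is denoising those residuals before the map is built.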


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2503
Author(s):  
Taro Suzuki ◽  
Yoshiharu Amano

This paper proposes a method for detecting non-line-of-sight (NLOS) multipath, which causes large positioning errors in a global navigation satellite system (GNSS). We use the GNSS signal correlation output, the most primitive GNSS signal-processing output, to detect NLOS multipath with machine learning. The shape of the multi-correlator output is distorted by NLOS multipath, and features of that shape are used to discriminate NLOS signals. We implement two supervised learning methods, a support vector machine (SVM) and a neural network (NN), and compare their performance. In addition, we propose an automated method of collecting LOS and NLOS training data for machine learning. Evaluation of the proposed NLOS detection method in an urban environment confirmed that the NN performed better than the SVM and that 97.7% of NLOS signals were correctly discriminated.
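A hedged sketch of the SVM-versus-NN comparison using scikit-learn. The "multi-correlator" features here are synthetic triangular correlation shapes with an assumed peak distortion standing in for NLOS reception; real correlator output and the paper's feature set are not reproduced.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def correlator_shape(distort):
    """Toy 7-tap multi-correlator output: an ideal LOS correlation is a
    symmetric triangle; a (hypothetical) NLOS echo shifts and clips it."""
    x = np.linspace(-1, 1, 7)
    tri = np.clip(1 - np.abs(x - 0.4 * distort), 0, None)
    return tri + rng.normal(0, 0.05, x.size)   # receiver noise

X = np.array([correlator_shape(d) for d in [0] * 100 + [1] * 100])
y = np.array([0] * 100 + [1] * 100)            # 0 = LOS, 1 = NLOS
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

svm = SVC().fit(Xtr, ytr)
nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(Xtr, ytr)
print(svm.score(Xte, yte), nn.score(Xte, yte))
```

On data this cleanly separable both classifiers do well; the paper's finding that the NN edges out the SVM emerges only on real, noisier correlator shapes.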


2019 ◽  
Vol 28 (2) ◽  
pp. 275-289 ◽  
Author(s):  
S. Pramod Kumar ◽  
Mrityunjaya V. Latte

Abstract The traditional segmentation methods available for the pulmonary parenchyma are not accurate, because most of them exclude nodules or tumors adhering to the lung pleural wall, treating them as fat. In this paper, several techniques are used in successive phases, including two-dimensional (2D) optimal threshold selection and 2D reconstruction for lung parenchyma segmentation. The lung parenchyma boundaries are then repaired using an improved chain code and Bresenham pixel interconnection. The proposed segmentation-and-repair method is fully automated. Here, 21 thoracic computed tomography slices containing juxtapleural nodules and 115 lung parenchyma scans are used to verify the robustness and accuracy of the proposed method, and the results are compared with the most cited active contour methods. Empirical results show that the proposed fully automated method for segmenting the lung parenchyma is more accurate: it is 100% sensitive to the inclusion of nodules/tumors adhering to the pleural wall, juxtapleural nodule segmentation accuracy is >98%, and lung parenchyma segmentation accuracy is >96%.
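The optimal-threshold phase can be illustrated with the classic one-dimensional Otsu criterion; the paper uses a 2D variant, but this simpler form conveys the shared idea of maximising between-class variance of the grey-level histogram.

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Classic 1D Otsu: choose the histogram cut that maximises the
    between-class variance (a sketch of the simpler relative of the
    paper's 2D optimal threshold selection)."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    omega = np.cumsum(p)                      # class-0 probability
    mu = np.cumsum(p * np.arange(bins))       # cumulative mean (bin index)
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    k = np.nanargmax(sigma_b)                 # best cut, skipping 0/0 cells
    return edges[k + 1]
```

Applied to a CT slice, everything below the returned threshold is candidate parenchyma/air and everything above is denser tissue; the 2D version adds a neighbourhood-average axis to make the cut robust to noise.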


2021 ◽  
Vol 11 (12) ◽  
pp. 5646
Author(s):  
Cheng-Wei Hung ◽  
Ying-Kuan Tsai ◽  
Tai-An Chen ◽  
Hsin-Hung Lai ◽  
Pin-Wen Wu

This study used experimental and numerical simulation methods to examine the attenuation mechanism of a blast inside a tunnel for different configurations of a tunnel pressure-reduction module under near-field explosion conditions. In the experiments, a small-scale model was used for explosion tests of the tunnel pressure-reduction module (expansion chamber, single pressure-relief orifice plate, double pressure-relief orifice plate). In the numerical simulation, the pressure transfer was evaluated using the ALE fluid-solid coupling and mapping technique. The findings showed that the pressure attenuation module changes the tunnel cross-section to diffuse, reduce, or detour the pressure transfer, producing the blast attenuation effect. In terms of blast attenuation, the double pressure-relief orifice plate was better than the single pressure-relief orifice plate, which in turn was better than the expansion chamber: the expansion chamber attenuated the blast by 30%, the single plate by 51%, and the double plate by 82%. The blast attenuation trend of the numerical simulation generally matched the experimental results. The results of this study can serve as a reference for future protective designs and reinforce the U.S. Force regulations.


2018 ◽  
Vol 14 (06) ◽  
pp. 191
Author(s):  
Chao Huang ◽  
Yuang Mao

To further study the basic principle and localization process of the DV-Hop location algorithm, the location error of the traditional algorithm caused by the minimum hop count was analyzed and demonstrated in detail. RSSI ranging technology was introduced to modify the minimum-hop stage, improving the minimum hop count used by the DV-Hop algorithm. For the location error caused by the average hop distance, the hop distance of the original algorithm was optimized: the improved DV-Hop average-hop-distance algorithm modifies the average range calculation by introducing the proportion of beacon nodes and an optimal threshold value. The optimizations of the two stages were combined into an improved location algorithm based on hop-distance optimization that retains the advantages of both. Finally, the traditional DV-Hop location algorithm and the three improved location algorithms were simulated and analyzed from multiple angles by varying the beacon-node ratio and the node communication radius. The experimental results showed that the improved algorithm was better than the original algorithm in positioning stability and positioning accuracy.
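A compact sketch of the baseline DV-Hop estimate that the improvements above start from: beacons derive an average per-hop distance from known inter-beacon distances, hop counts are converted to ranges, and the position follows from linearised least squares. The RSSI and hop-distance refinements are not modelled, and hop counts are allowed to be non-integer here for clarity.

```python
import numpy as np

def dvhop_estimate(beacons, hops_to_beacons, hops_between_beacons):
    """Classic DV-Hop position estimate for one unknown node.

    beacons: (n, 2) beacon coordinates
    hops_to_beacons: (n,) minimum hop counts from the unknown node
    hops_between_beacons: (n, n) minimum hop counts between beacons
    """
    beacons = np.asarray(beacons, float)
    # Each beacon's average distance per hop, from true inter-beacon
    # distances and the flooded hop counts.
    d = np.linalg.norm(beacons[:, None] - beacons[None, :], axis=2)
    hop_dist = d.sum(axis=1) / hops_between_beacons.sum(axis=1)
    # Range estimates use the hop size of the nearest beacon.
    nearest = np.argmin(hops_to_beacons)
    ranges = hops_to_beacons * hop_dist[nearest]
    # Linearised trilateration: subtract the last circle equation.
    A = 2 * (beacons[:-1] - beacons[-1])
    b = (np.sum(beacons[:-1] ** 2, axis=1) - np.sum(beacons[-1] ** 2)
         - ranges[:-1] ** 2 + ranges[-1] ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

The error sources the abstract targets are visible in this sketch: integer hop counts and a single average hop distance both bias `ranges`, which is exactly what the RSSI and hop-distance corrections address.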


Author(s):  
Luis Angel García-Gonzales ◽  
Teresa Angélica Evaristo-Chiyong

Objective: To evaluate the esthetic perception of the smile according to variation of the vertical position and angulation of the upper central incisor (UPI) by dental students (DS) and common people (CP) in three regions of Peru. Materials and Methods: Descriptive cross-sectional design. The sample comprised 462 adults, divided into subgroups of 77 DS and 77 CP from each of the Lima (Coast), Junín (Highlands), and Loreto (Rainforest) regions. Using Photoshop® software, a photograph of a woman's smile was modified by varying the vertical position and angulation of the UPI, and the resulting images were evaluated using a visual analog scale. Results: The CP rated the images higher than the DS in most categories (p < 0.001). The smiles best rated by DS were at vertical positions of 1 mm and 2 mm and an angulation of 0°, while for CP they were 1 mm and 0°, respectively (p < 0.05). Comparing by region, the DS of Lima gave the lowest rating for 0° (52.63) and those of Junín the highest for 4° (45.90). The CP of Loreto registered the lowest scores for the vertical-position categories (p < 0.001), while for angulation, Junín rated -6° and 0° lower than Loreto, and Lima gave the lowest rating for 4° (p < 0.001). Conclusions: The esthetic perception of the smile is affected by variation of the vertical position and angulation of the UPI among common people and dental students in the three regions of Peru.


Author(s):  
Guorui Sheng ◽  
Tiegang Gao

Seam-Carving is widely used for content-aware image resizing. To cope with digital image forgery based on Seam-Carving, a new detection algorithm built on Benford's law is presented. The algorithm uses the probabilities of the first digits of quantized DCT coefficients from individual AC modes to detect seam-carved images. Experimental results show that the proposed method performs better than the method based on traditional Markov features and other existing methods.
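The first-digit feature behind such a detector can be sketched as follows: compute the empirical first-digit distribution of coefficient magnitudes and measure its deviation from Benford's law. This is a simplified, single-pool version; the paper builds features per AC mode and feeds them to a classifier.

```python
import numpy as np

def first_digit_probs(coeffs):
    """Empirical first-digit distribution of the nonzero |coefficients|."""
    c = np.abs(np.asarray(coeffs, float))
    c = c[c > 0]
    # Scale each value into [1, 10) and truncate to its leading digit.
    digits = (c / 10.0 ** np.floor(np.log10(c))).astype(int)
    return np.array([(digits == d).mean() for d in range(1, 10)])

def benford_probs():
    """Benford's law: P(d) = log10(1 + 1/d) for d = 1..9."""
    d = np.arange(1, 10)
    return np.log10(1 + 1 / d)

def benford_divergence(coeffs):
    """Total absolute deviation from Benford's law. Larger values
    suggest the coefficient statistics have been disturbed, e.g. by
    seam carving (a stand-in for the paper's per-AC-mode features)."""
    return float(np.abs(first_digit_probs(coeffs) - benford_probs()).sum())
```

Natural-image DCT coefficients follow Benford's law closely, so resizing operations that resample content tend to push this divergence up, which is what the classifier exploits.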


2020 ◽  
Vol 32 ◽  
pp. 03006
Author(s):  
D. Suneetha ◽  
D. Rathna Kishore ◽  
P. Narendra Babu

Data compression in cryptography is an interesting research topic. Compression reduces the amount of data transferred and the storage space required, which in turn affects bandwidth usage. Moreover, when a plain text is converted to cipher text, the cipher text becomes longer, which adds substantially to the information stored. It is therefore important to address the storage-capacity issue along with the security issues of exponentially growing information. This problem can be addressed by compressing the ciphertext with a suitable compression algorithm; the proposed work uses a palindrome compression technique. The compression ratio of the proposed method is better than that of the standard method for both color and gray-scale images, and the experimental results for the proposed method are better than those of existing methods for different types of images.
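The palindrome technique itself is not detailed above. As general background on why compressing ciphertext is hard, a toy experiment shows that the order of compression and encryption matters: ciphertext from a strong cipher looks random and resists generic compression. The XOR one-time pad and the sample text are illustrative assumptions, not the paper's scheme.

```python
import os
import zlib

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy XOR one-time pad (for illustration only, not a real cipher)."""
    return bytes(a ^ b for a, b in zip(data, key))

plain = b"highly redundant plain text " * 200
key = os.urandom(len(plain))

# Compress, then encrypt: the compressor sees the redundant plaintext.
compress_then_encrypt = xor_encrypt(zlib.compress(plain), key)
# Encrypt, then compress: random-looking bytes barely compress at all.
encrypt_then_compress = zlib.compress(xor_encrypt(plain, key))

print(len(plain), len(compress_then_encrypt), len(encrypt_then_compress))
```

Any scheme that compresses ciphertext well, as the abstract claims, must therefore exploit structure that the cipher leaves in (or that the scheme deliberately preserves); that is the design space the proposed technique sits in.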

