Recognition Rate Advancement and Data Error Improvement of Pathology Cutting with H-DenseUNet for Hepatocellular Carcinoma Image

Diagnostics ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 1599
Author(s):  
Wen-Fan Chen ◽  
Hsin-You Ou ◽  
Cheng-Tang Pan ◽  
Chien-Chang Liao ◽  
Wen Huang ◽  
...  

Because previous studies have rarely investigated the discrepancy in recognition rate and pathology data error that arises when a model is applied to different databases, this study investigates how the recognition rate of deep learning-based liver lesion segmentation improves when hospital data are incorporated. The recognition model is H-DenseUNet, applied to the segmentation of the liver and its lesions; a mixed 2D/3D Hybrid-DenseUNet is used to reduce recognition time and system memory requirements. Differences in recognition results were determined by comparing training on the standard LiTS competition data set with training on the same set after mixing in data from an additional 30 patients. An average error of 9.6% was obtained when comparing the actual pathology data with the pathology data derived from analysing recognised images imported from Kaohsiung Chang Gung Memorial Hospital; after mixing the LiTS database with hospital data for training, the average error rate of the recognition output fell to 1%. For recognition, the Dice coefficient was 0.52 after training for 50 epochs on the standard LiTS database, and increased to 0.61 after adding the 30 hospital cases to the training set. Using 3D Slicer and ITK-SNAP, a 3D image of the lesion and liver segmentation can then be produced. It is hoped that this method will stimulate further research beyond the general public standard databases, examining the applicability of hospital data and improving the generality of such databases.
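The Dice coefficients quoted in this abstract (0.52 vs. 0.61) measure the overlap between predicted and ground-truth segmentation masks. A minimal pure-Python sketch of the metric, using toy binary masks rather than LiTS images:

```python
def dice_coefficient(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks given as flat 0/1 lists."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

# Toy 1-D "masks": 4 overlapping foreground pixels out of 6 and 5
pred  = [1, 1, 1, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 1, 1, 0, 1, 0]
print(dice_coefficient(pred, truth))  # 2*4 / (6+5) ≈ 0.727
```

In practice the masks are 3D voxel volumes; the formula is identical after flattening.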

2021 ◽  
pp. postgradmedj-2020-139361
Author(s):  
María Matesanz-Fernández ◽  
Teresa Seoane-Pillado ◽  
Iria Iñiguez-Vázquez ◽  
Roi Suárez-Gil ◽  
Sonia Pértega-Díaz ◽  
...  

Objective: We aim to identify patterns of disease clusters among inpatients of a general hospital and to describe the characteristics and evolution of each group. Methods: We used two data sets (hospitalisations and patients) from the CMBD (Conjunto Mínimo Básico de Datos, the Minimum Basic Hospital Data Set, MBDS) of the Lucus Augusti Hospital (Spain), conducting a retrospective cohort study of the 74 220 patients discharged from the medical area between 1 January 2000 and 31 December 2015. We created multimorbidity clusters using multiple correspondence analysis. Results: We identified five clusters, each with a distinct gender and age profile. Cluster 1: alcoholic liver disease, alcohol dependency syndrome, lung and digestive tract malignant neoplasms (age under 50 years). Cluster 2: large intestine, prostate, breast and other malignant neoplasms, lymphoma and myeloma (age over 70, mostly males). Cluster 3: malnutrition, Parkinson disease and other mobility disorders, dementia and other mental health conditions (age over 80 years, mostly women). Cluster 4: atrial fibrillation/flutter, cardiac failure, chronic kidney failure and heart valve disease (age 70–80, mostly women). Cluster 5: hypertension/hypertensive heart disease, type 2 diabetes mellitus, ischaemic cardiomyopathy, dyslipidaemia, obesity and sleep apnoea, mostly men (age 60–80). The clusters differed significantly in gender, age, number of chronic pathologies, number of rehospitalisations and in-hospital mortality (p < 0.001 in all cases). Conclusions: We identify, for the first time in a hospital environment, five clusters of disease combinations among inpatients. These clusters contain several high-incidence diseases related to both age and gender that express their own evolution and clinical characteristics over time.


2020 ◽  
Vol 4 (1) ◽  
Author(s):  
Francesco Rizzetto ◽  
Francesca Calderoni ◽  
Cristina De Mattia ◽  
Arianna Defeudis ◽  
Valentina Giannini ◽  
...  

Abstract Background: Radiomics is expected to improve the management of metastatic colorectal cancer (CRC). We aimed to evaluate the impact of liver lesion contouring as a source of variability in radiomic features (RFs). Methods: After Ethics Committee approval, 70 liver metastases in 17 CRC patients were segmented on contrast-enhanced computed tomography scans by two residents and checked by experienced radiologists. RFs from grey level co-occurrence and run length matrices were extracted from three-dimensional (3D) regions of interest (ROIs) and from the largest two-dimensional (2D) ROIs. Inter-reader variability was evaluated with the Dice coefficient and the Hausdorff distance, whilst its impact on RFs was assessed using mean relative change (MRC) and the intraclass correlation coefficient (ICC). For the main lesion of each patient, one reader also segmented a circular ROI on the same image used for the 2D ROI. Results: The best inter-reader contouring agreement was observed for 2D ROIs according to both the Dice coefficient (median 0.85, interquartile range 0.78–0.89) and the Hausdorff distance (0.21 mm, 0.14–0.31 mm). Comparing RF values, MRC ranged 0–752% for 2D and 0–1567% for 3D. For 24/32 RFs (75%), MRC was lower for 2D than for 3D. An ICC > 0.90 was observed for more RFs with 2D (53%) than with 3D (34%) segmentation. Only 2/32 RFs (6%) showed a variability between 2D and circular ROIs higher than the inter-reader variability. Conclusions: A 2D contouring approach may help mitigate overall inter-reader variability, although stable RFs can be extracted from both 3D and 2D segmentations of CRC liver metastases.
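The Hausdorff distance used above to quantify inter-reader contouring agreement is the largest distance from any point of one contour to the nearest point of the other. A minimal sketch on two toy contours (hypothetical point sets, not the study's ROIs):

```python
import math

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (lists of (x, y))."""
    def directed(p, q):
        return max(min(math.dist(u, v) for v in q) for u in p)
    return max(directed(a, b), directed(b, a))

contour_a = [(0, 0), (1, 0), (1, 1)]
contour_b = [(0, 0), (1, 0), (2, 1)]
print(hausdorff(contour_a, contour_b))  # 1.0
```

The directed distances are taken in both directions and the maximum reported, so the metric penalises any region where the two contours diverge.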


2021 ◽  
Vol 30 (1) ◽  
pp. 893-902
Author(s):  
Ke Xu

Abstract A portrait recognition system can play an important role in emergency evacuation during mass emergencies. This paper designs a portrait recognition system, analyses the overall structure of the system and the image preprocessing method, and uses the Single Shot MultiBox Detector (SSD) algorithm for portrait detection. It also designs an improved algorithm combining principal component analysis (PCA) with linear discriminant analysis (LDA) for portrait recognition, and tests the system in a shopping mall, collecting and monitoring portraits to establish a data set. The results showed that the missed detection rate and false detection rate of the SSD algorithm were 0.78% and 2.89%, respectively, both lower than those of the AdaBoost algorithm. Comparisons with the PCA, LDA, and PCA + LDA algorithms demonstrated that the improved PCA + LDA algorithm achieved the highest recognition rate (95.8%), the largest area under the receiver operating characteristic curve, and the shortest recognition time (465 ms). The experimental results show that the improved PCA + LDA algorithm is reliable for portrait recognition and can be used for emergency evacuation in mass emergencies.
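The LDA stage of the PCA + LDA pipeline described above can be illustrated with a two-class Fisher discriminant. The sketch below (pure Python, with hypothetical 2-D feature vectors standing in for PCA-reduced face features) computes the projection direction w = Sw⁻¹(m1 − m0):

```python
def fisher_lda_direction(class0, class1):
    """Fisher discriminant direction w = Sw^-1 (m1 - m0) for 2-D samples."""
    def mean(pts):
        n = len(pts)
        return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]

    def scatter(pts, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for x, y in pts:
            dx, dy = x - m[0], y - m[1]
            s[0][0] += dx * dx; s[0][1] += dx * dy
            s[1][0] += dy * dx; s[1][1] += dy * dy
        return s

    m0, m1 = mean(class0), mean(class1)
    s0, s1 = scatter(class0, m0), scatter(class1, m1)
    sw = [[s0[i][j] + s1[i][j] for j in range(2)] for i in range(2)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det], [-sw[1][0] / det, sw[0][0] / det]]
    d = [m1[0] - m0[0], m1[1] - m0[1]]
    return [inv[0][0] * d[0] + inv[0][1] * d[1],
            inv[1][0] * d[0] + inv[1][1] * d[1]]

# Two separable toy classes (hypothetical feature vectors, not face data)
class0 = [(1, 2), (2, 3), (3, 3)]
class1 = [(6, 5), (7, 8), (8, 7)]
w = fisher_lda_direction(class0, class1)
# Projecting samples onto w separates the classes along a single axis
```

In the paper's pipeline, PCA would first reduce the image to a low-dimensional vector, and LDA of this kind then finds the most class-discriminative projection.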


2016 ◽  
Vol 14 (1) ◽  
pp. 172988141769231 ◽  
Author(s):  
Yingfeng Cai ◽  
Youguo He ◽  
Hai Wang ◽  
Xiaoqiang Sun ◽  
Long Chen ◽  
...  

The emergence and development of deep learning in machine learning provide a new method for vision-based pedestrian recognition. To achieve better performance in this application, an improved weakly supervised hierarchical deep learning pedestrian recognition algorithm with two-dimensional deep belief networks is proposed. The improvements address weaknesses in the structure and training methods of existing classifiers. First, the traditional one-dimensional deep belief network is expanded to two dimensions, which allows the image matrix to be loaded directly and preserves more information from the sample space. Then, a determination regularization term with a small weight is added to the traditional unsupervised training objective function. This modification transforms the original unsupervised training into weakly supervised training and gives the extracted features discriminative ability. Multiple sets of comparative experiments show that the proposed algorithm achieves a better recognition rate than other deep learning algorithms and outperforms most existing state-of-the-art methods on the non-occlusion pedestrian data set, while performing fairly on the weakly and heavily occluded data sets.


2021 ◽  
Vol 11 (6) ◽  
pp. 1592-1598
Author(s):  
Xufei Liu

Early detection of cardiovascular disease from the electrocardiogram (ECG) is very important for the timely treatment of cardiovascular patients and increases their survival rate. The ECG is a visual representation of changes in cardiac bioelectricity and is the basis for assessing heart health. With the rise of edge machine learning and Internet of Things (IoT) technologies, small machine learning models have received attention. This study proposes an automatic ECG classification method based on IoT technology and an LSTM network to achieve early monitoring and early prevention of cardiovascular diseases. Specifically, the paper first proposes a single-layer bidirectional LSTM network structure that makes full use of the timing-dependent features of the preceding and following sampling points to extract features automatically; the network structure is lightweight and its computational complexity is low. To verify the effectiveness of the proposed classification model, it is compared against relevant algorithms on the public MIT-BIH data set. The model is then embedded in a wearable device to classify the collected ECG automatically, and when an abnormality is detected the user is alerted by an alarm. The experimental results show that the proposed model has a simple structure and a high classification and recognition rate, meeting the needs of wearable devices for monitoring patients' ECGs.


2004 ◽  
Vol 87 (5) ◽  
pp. 1153-1163 ◽  
Author(s):  
Manuela Buchgraber ◽  
Chiara Senaldi ◽  
Franz Ulberth ◽  
Elke Anklam

Abstract The development and in-house testing of a method for the detection and quantification of cocoa butter equivalents in cocoa butter and plain chocolate is described. A database of the triacylglycerol profiles of 74 genuine cocoa butter and 75 cocoa butter equivalent samples, obtained by high-resolution capillary gas-liquid chromatography, was created, using a certified cocoa butter reference material (IRMM-801) for calibration. Based on these data, a large number of cocoa butter/cocoa butter equivalent mixtures were arithmetically simulated. By subjecting the data set to various statistical tools, reliable models for both detection (a univariate regression model) and quantification (a multivariate model) were elaborated. Validation data sets consisting of large numbers of samples (n = 4050 for detection, n = 1050 for quantification) were used to test the models. Excluding pure illipé fat samples from the data set, the detection limit was determined to be between 1 and 3% foreign fat in cocoa butter; recalculated for a chocolate with a fat content of 30%, these figures correspond to 0.3–0.9% cocoa butter equivalent. For quantification, the average prediction error was estimated to be 1.1% cocoa butter equivalent in cocoa butter without prior knowledge of the materials used in the blend, corresponding to 0.3% in chocolate (fat content 30%). The advantage of the approach is that, by using IRMM-801 for calibration, the established mathematical decision rules can be transferred to any testing laboratory.
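The univariate regression model used for detection amounts to an ordinary least-squares fit of a marker response against the admixture level. A sketch with hypothetical calibration values (the paper's actual triacylglycerol data are not reproduced here):

```python
def linear_fit(x, y):
    """Ordinary least squares for y = a + b*x (a univariate regression model)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b  # intercept, slope

# Hypothetical calibration points: % CBE admixture vs. a TAG marker ratio
admixture = [0.0, 5.0, 10.0, 15.0, 20.0]
marker    = [1.00, 1.11, 1.19, 1.31, 1.40]
a, b = linear_fit(admixture, marker)
# slope b ≈ 0.020 marker units per % admixture for these toy values
```

Inverting such a calibration line (solving for x given a measured marker value) is what turns the regression into a detection rule with a threshold at the 1–3% level reported above.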


Thorax ◽  
2017 ◽  
Vol 73 (4) ◽  
pp. 339-349 ◽  
Author(s):  
Margreet Lüchtenborg ◽  
Eva J A Morris ◽  
Daniela Tataru ◽  
Victoria H Coupland ◽  
Andrew Smith ◽  
...  

Introduction: The International Cancer Benchmarking Partnership (ICBP) identified significant international differences in lung cancer survival. Differing levels of comorbid disease across ICBP countries have been suggested as a potential explanation for this variation but, to date, no studies have quantified their impact. This study investigated whether comparable, robust comorbidity scores can be derived from the different routine population-based cancer data sets available in the ICBP jurisdictions and, if so, used them to quantify international variation in comorbidity and determine its influence on outcome. Methods: Linked population-based lung cancer registry and hospital discharge data sets were acquired from nine ICBP jurisdictions in Australia, Canada, Norway and the UK, providing a study population of 233 981 individuals. For each person in this cohort, Charlson, Elixhauser and inpatient bed-day comorbidity scores were derived for the 4–36 months prior to lung cancer diagnosis. The scores were then compared to assess their validity and the feasibility of their use in international survival comparisons. Results: It was feasible to generate the three comorbidity scores for each jurisdiction, and they were found to have good content, face and concurrent validity. Predictive validity was limited, and there was evidence that reliability was questionable. Conclusion: The results presented here indicate that interjurisdictional comparability of recorded comorbidity was limited, owing to probable differences in coding and hospital admission practices in each area. Before the contribution of comorbidity to international differences in cancer survival can be investigated, an internationally harmonised comorbidity index is required.
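A Charlson-type score of the kind derived here is, at its core, a weighted sum over a patient's recorded comorbidities. A sketch using an illustrative subset of the published weights (the full index covers roughly 17 condition groups, and in the study the conditions were taken from coded hospital discharge records):

```python
# Illustrative (partial) Charlson weights; the full index has ~17 conditions
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "copd": 1,
    "diabetes_without_complications": 1,
    "renal_disease": 2,
    "any_malignancy": 2,
    "metastatic_solid_tumour": 6,
}

def charlson_score(conditions):
    """Sum the weights of the comorbidities recorded for one patient."""
    return sum(CHARLSON_WEIGHTS.get(c, 0) for c in conditions)

patient = ["copd", "renal_disease", "any_malignancy"]
print(charlson_score(patient))  # 1 + 2 + 2 = 5
```

The interjurisdictional comparability problem the paper identifies arises one step earlier: whether a condition appears in `conditions` at all depends on local coding and admission practices.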


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
BinBin Zhang ◽  
Fumin Zhang ◽  
Xinghua Qu

Purpose: Laser-based measurement techniques offer various advantages over conventional techniques, such as being non-destructive and non-contact and allowing fast measurement over long distances. In cooperative laser ranging systems, it is crucial to extract the center coordinates of retroreflectors to accomplish automatic measurement; this paper aims to propose a novel method to solve this problem. Design/methodology/approach: We propose a method using Mask R-CNN (Region-based Convolutional Neural Network), with ResNet101 (Residual Network 101) and an FPN (Feature Pyramid Network) as the backbone, to localize retroreflectors, realizing automatic recognition against different backgrounds. Compared with two other deep learning algorithms, experiments show that the recognition rate of Mask R-CNN is better, especially for small-scale targets. On this basis, an ellipse detection algorithm is introduced to obtain the ellipses of the retroreflectors from the recognized target areas, and the center coordinates of the retroreflectors in the camera coordinate system are obtained by a mathematical method. Findings: To verify the accuracy of this method, an experiment was carried out: the distance between two retroreflectors a known 1,000.109 mm apart was measured with a root-mean-square error of 2.596 mm, meeting the requirements for coarse location of retroreflectors. Research limitations/implications: (i) As the data set has only 200 pictures, although data augmentation methods such as rotating, mirroring and cropping were used, there is still room for improvement in the generalization ability of the detection. (ii) The ellipse detection algorithm needs to work in relatively dark conditions, as the retroreflector is made of stainless steel, which easily reflects light.
Originality/value: The originality/value of the article lies in obtaining the center coordinates of multiple retroreflectors automatically, even against a cluttered background; recognizing retroreflectors of different sizes, especially small targets; meeting the recognition requirement of multiple targets in a large field of view; and obtaining the 3D centers of targets by monocular model-based vision.


Author(s):  
UJJWAL BHATTACHARYA ◽  
TANMOY KANTI DAS ◽  
AMITAVA DATTA ◽  
SWAPAN KUMAR PARUI ◽  
BIDYUT BARAN CHAUDHURI

This paper proposes a novel approach to the automatic recognition of handprinted Bangla (an Indian script) numerals. A modified Topology Adaptive Self-Organizing Neural Network is proposed to extract a vector skeleton from a binary numeral image, and simple heuristics are used to prune any artifacts in the skeletal shape. Topological and structural features such as loops, junctions and the positions of terminal nodes are used with a hierarchical tree classifier to divide the handwritten numerals into smaller subgroups. Multilayer perceptron (MLP) networks are then employed to uniquely classify the numerals within each subgroup. The system was trained on a sample data set of 1800 numerals, achieving a 93.26% correct recognition rate and a 1.71% rejection rate on a separate test set of 7760 samples. In addition, a validation set of 1440 samples was used to determine when to terminate the training of the MLP networks. The proposed scheme is sufficiently robust with respect to considerable object noise.


2019 ◽  
Vol 15 (1) ◽  
pp. 141-146
Author(s):  
John P. Corbett ◽  
Marc D. Breton ◽  
Stephen D. Patek

Introduction: It is important to have accurate information regarding when individuals with type 1 diabetes have eaten and taken insulin, so that those events can be reconciled with their blood glucose levels throughout the day. Insulin pumps and connected insulin pens record when the user injected insulin and how many carbohydrates were entered, but it is often unclear when meals occurred. This project demonstrates a method to estimate meal times using a multiple-hypothesis approach. Methods: When an insulin dose is recorded, multiple hypotheses are generated describing variations of when the meal in question occurred. As postprandial glucose values inform the model, the posterior probability of each hypothesis is evaluated, and from these posterior probabilities an expected meal time is found. The method was tested using simulation and a clinical data set (n = 11), with either uniform or normally distributed (μ = 0, σ = 10 or 20 minutes) prior probabilities for the hypothesis set. Results: For the simulation data set, meal times were estimated with an average error of −0.77 (±7.94) minutes when uniform priors were used, and −0.99 (±8.55) and −0.88 (±7.84) minutes for normally distributed priors (σ = 10 and 20 minutes, respectively). For the clinical data set, the average estimation error was 0.02 (±30.87), 1.38 (±21.58), and 0.04 (±27.52) minutes for the uniform and normal priors (σ = 10 and 20 minutes). Conclusion: This technique could be used to advise physicians about the meal-time insulin dosing behaviours of their patients and potentially inform changes in their treatment strategy.
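The multiple-hypothesis update described in the Methods is a standard Bayesian posterior calculation over candidate meal times. A toy sketch in which a simple Gaussian likelihood stands in for the paper's glucose-dynamics model (the offsets, σ values and "true" offset below are illustrative, not taken from the study):

```python
import math

def update_posterior(prior, likelihoods):
    """One Bayesian update: posterior ∝ prior × likelihood, renormalised."""
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def expected_meal_time(offsets, posterior):
    """Posterior-weighted mean of the candidate meal times."""
    return sum(t * p for t, p in zip(offsets, posterior))

# Candidate meal offsets (minutes relative to the recorded insulin dose)
offsets = [-30, -15, 0, 15, 30]
posterior = [0.2] * 5  # uniform prior over the hypothesis set

# Toy Gaussian likelihoods of an observed postprandial glucose trace under
# each hypothesis (a stand-in for the paper's glucose-dynamics model)
true_offset = 15
likelihoods = [math.exp(-((t - true_offset) ** 2) / (2 * 15 ** 2))
               for t in offsets]
posterior = update_posterior(posterior, likelihoods)
print(round(expected_meal_time(offsets, posterior), 1))  # → 13.1
```

As more postprandial glucose samples arrive, the same update is applied repeatedly, concentrating the posterior around the most plausible meal time.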

