Accurate pancreas segmentation using multi-level pyramidal pooling residual U-Net with adversarial mechanism

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Meiyu Li ◽  
Fenghui Lian ◽  
Chunyu Wang ◽  
Shuxu Guo

Abstract Background A novel multi-level pyramidal pooling residual U-Net with adversarial mechanism was proposed for organ segmentation from medical imaging and evaluated on the challenging NIH Pancreas-CT dataset. Methods The 82 pancreatic contrast-enhanced abdominal CT volumes were split via four-fold cross-validation to test the model performance. To achieve accurate segmentation, we first incorporated residual learning into an adversarial U-Net to obtain better gradient information flow and thereby improve segmentation performance. Then, we introduced a multi-level pyramidal pooling module (MLPP), in which a novel pyramidal pooling block gathers contextual information for segmentation; four structural groups consisting of different numbers of pyramidal pooling blocks were compared to identify the configuration with the best performance, and two types of pooling blocks were applied in the experiments to further assess the robustness of MLPP for pancreas segmentation. Dice similarity coefficient (DSC) and recall were used as evaluation metrics. Results The proposed method exceeded the baseline network by 5.30% and 6.16% in DSC and recall, respectively, and achieved competitive results compared with state-of-the-art methods. Conclusions Our algorithm showed strong segmentation performance even on the particularly challenging pancreas dataset, indicating that the proposed model is a satisfactory and promising segmenter.
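The abstract does not give the exact block design, but as a rough illustration of the pyramidal pooling idea, the sketch below shows a generic pyramid pooling block in PyTorch; the pool sizes, channel splits, and placement inside the residual adversarial U-Net are assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPoolingBlock(nn.Module):
    """Generic pyramid pooling: pool at several scales, project, upsample, concatenate."""
    def __init__(self, in_channels, pool_sizes=(1, 2, 3, 6)):
        super().__init__()
        branch_channels = in_channels // len(pool_sizes)
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveAvgPool2d(size),
                nn.Conv2d(in_channels, branch_channels, kernel_size=1, bias=False),
                nn.BatchNorm2d(branch_channels),
                nn.ReLU(inplace=True),
            )
            for size in pool_sizes
        ])
        # Fuse the original features with the pooled context branches.
        self.fuse = nn.Conv2d(in_channels + branch_channels * len(pool_sizes),
                              in_channels, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[2:]
        context = [F.interpolate(branch(x), size=(h, w),
                                 mode="bilinear", align_corners=False)
                   for branch in self.branches]
        return self.fuse(torch.cat([x] + context, dim=1))
```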

2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Mingzhu Tang ◽  
Xiangwan Fu ◽  
Huawei Wu ◽  
Qi Huang ◽  
Qi Zhao

Traffic flow anomaly detection helps improve the efficiency and reliability of detecting faulty behavior and the overall effectiveness of traffic operations. Data collected by traffic flow sensors contain substantial noise due to equipment failure, environmental interference, and other factors. To handle such heavily noisy traffic flow data, a traffic flow anomaly detection method based on robust ridge regression with the particle swarm optimization (PSO) algorithm is proposed. Feature sets containing historical characteristics with a strong linear correlation and statistical characteristics computed over an optimal sliding window are constructed. These feature sets are then provided as inputs to the PSO-Huber-Ridge model, which outputs the forecasted traffic flow. The Huber loss function is used to reduce noise interference in the traffic flow, and the L2 regularization term of ridge regression reduces overfitting during model training. A fitness function is constructed that balances the k-fold cross-validation root mean square error against the k-fold cross-validation mean absolute error via a control parameter η, improving the efficiency of the optimization algorithm and the generalization ability of the proposed model. The hyperparameters of the robust ridge regression forecast model are tuned by the PSO algorithm to obtain their optimal values. A traffic flow data set is used to train and validate the proposed model. Compared with other optimization methods, the proposed model has the lowest RMSE, MAE, and MAPE. Finally, the traffic flow forecasted by the proposed model is used for anomaly detection: abnormal errors between the forecasted and actual values are detected against an abnormal traffic flow threshold based on the sliding window. The experimental results verify the validity of the proposed anomaly detection model.
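The fitness function described above, a weighted combination of k-fold RMSE and MAE controlled by η, can be sketched as follows; scikit-learn's HuberRegressor stands in for the Huber-Ridge model, and a simple random search stands in for PSO, which the paper uses to optimize the same objective. The data and hyperparameter ranges are placeholders.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor
from sklearn.model_selection import KFold

def cv_fitness(X, y, epsilon, alpha, eta=0.5, k=5):
    """k-fold fitness combining RMSE and MAE, weighted by eta (as in the abstract)."""
    rmse, mae = [], []
    for train_idx, val_idx in KFold(n_splits=k, shuffle=True, random_state=0).split(X):
        model = HuberRegressor(epsilon=epsilon, alpha=alpha).fit(X[train_idx], y[train_idx])
        err = model.predict(X[val_idx]) - y[val_idx]
        rmse.append(np.sqrt(np.mean(err ** 2)))
        mae.append(np.mean(np.abs(err)))
    return eta * np.mean(rmse) + (1.0 - eta) * np.mean(mae)

# Stand-in for PSO: evaluate the fitness over random candidate hyperparameters
# and keep the best; a PSO routine would search this same objective.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))            # placeholder features
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.3, size=200)
candidates = [(rng.uniform(1.1, 2.0), 10 ** rng.uniform(-4, 1)) for _ in range(20)]
best = min(candidates, key=lambda p: cv_fitness(X, y, *p))
print("best (epsilon, alpha):", best)
```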


2018 ◽  
Vol 7 (2.27) ◽  
pp. 93
Author(s):  
Pooja Thakur ◽  
Mandeep Singh ◽  
Harpreet Singh ◽  
Prashant Singh Rana

H-1B work visas are used to hire highly skilled foreign specialists at low wages in America, which helps firms but affects the U.S. economy unfavorably. More than 100,000 people apply for this visa every year, for higher education as well as for work, and the number grows each year. Applicants are selected by a lottery system that does not follow any foolproof method, which creates a gap between US-based and foreign workers. We examine petitions filed from 2015 to 2017 with the goal of developing a better prediction model using machine learning that can forecast the outcome of a petition in advance and indicate whether it is likely to succeed. In this work, we use seven classification models, including Decision Tree, C5.0, Random Forest, Naïve Bayes, Neural Network, and SVM, to predict the status of a petition as certified, denied, withdrawn, or certified-withdrawn. The predictions of these models are compared on accuracy. C5.0 performs best as a single model with an accuracy of 94.62%, but the proposed model, built by an ensemble method, gives better results with an accuracy of 95.4%, validated by 10-fold cross-validation.
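As an illustration of an ensemble validated with 10-fold cross-validation, the sketch below combines several of the listed classifiers with soft voting in scikit-learn; C5.0 has no direct scikit-learn equivalent, and the synthetic data merely stand in for the encoded petition features, so this is not the authors' exact ensemble.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder data standing in for the encoded petition features (4 status classes).
X, y = make_classification(n_samples=1000, n_features=20, n_classes=4,
                           n_informative=10, random_state=0)

base_models = [
    ("dt", DecisionTreeClassifier(random_state=0)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("nb", GaussianNB()),
    ("mlp", MLPClassifier(max_iter=500, random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
]
ensemble = VotingClassifier(estimators=base_models, voting="soft")

# 10-fold cross-validated accuracy, as used to validate the ensemble in the abstract.
scores = cross_val_score(ensemble, X, y, cv=10, scoring="accuracy")
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```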


Soil Research ◽  
2011 ◽  
Vol 49 (4) ◽  
pp. 305 ◽  
Author(s):  
Brian Horton ◽  
Ross Corkrey

Soil temperatures are related to air temperature and rainfall on the current day and preceding days, and this can be expressed in a non-linear relationship that provides a weighted value for the effect of air temperature or rainfall based on days of lag and soil depth. The weighted minimum and maximum air temperatures and weighted rainfall can then be combined with latitude and a seasonal function to estimate soil temperature at any depth in the range 5–100 cm. The model had a root mean square deviation of 1.21–1.85°C for minimum, average, and maximum soil temperature for all weather stations in Australia (mainland and Tasmania), except for maximum soil temperature at 5 and 10 cm, where the model was less precise (3.39°C and 2.52°C, respectively). Data for this analysis were obtained from 32–40 Bureau of Meteorology weather stations throughout Australia, and the proposed model was validated using 5-fold cross-validation.
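The abstract does not specify the weighting function, so the sketch below assumes an exponential decay over lagged days together with a simple seasonal harmonic, fitted by linear regression and scored with 5-fold cross-validation on synthetic station data; it illustrates the structure of such a model rather than its exact form.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

def lag_weighted(series, decay, n_lags=30):
    """Weighted sum of the current and preceding days (assumed exponential weighting)."""
    weights = np.exp(-decay * np.arange(n_lags))   # largest weight on the current day
    weights /= weights.sum()
    padded = np.concatenate([np.full(n_lags - 1, series[0]), series])
    out = np.zeros(len(series))
    for lag in range(n_lags):
        out += weights[lag] * padded[n_lags - 1 - lag : len(padded) - lag]
    return out

# Synthetic daily inputs standing in for one weather station.
rng = np.random.default_rng(0)
days = np.arange(3 * 365)
air_max = 20 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 2, days.size)
rain = rng.exponential(2.0, days.size)
soil_50cm = 18 + 8 * np.sin(2 * np.pi * (days - 20) / 365) + rng.normal(0, 1, days.size)

# Features: weighted air temperature, weighted rainfall, and a seasonal harmonic.
X = np.column_stack([
    lag_weighted(air_max, decay=0.15),
    lag_weighted(rain, decay=0.15),
    np.sin(2 * np.pi * days / 365),
    np.cos(2 * np.pi * days / 365),
])
pred = cross_val_predict(LinearRegression(), X, soil_50cm, cv=5)
rmsd = np.sqrt(np.mean((pred - soil_50cm) ** 2))
print(f"5-fold RMSD: {rmsd:.2f} °C")
```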


2021 ◽  
Author(s):  
Shazia Murad ◽  
Arwa Mashat ◽  
Alia Mahfooz ◽  
Sher Afzal Khan ◽  
Omar Barukab

Abstract Ubiquitination is a process that supports the growth and development of eukaryotic and prokaryotic organisms. It helps regulate numerous functions such as the cell division cycle, caspase-mediated cell death, maintenance of protein transcription, signal transduction, and restoration of DNA damage. Because of these properties, identifying ubiquitination sites is essential to understanding its molecular mechanism. Traditional methods such as mass spectrometry and site-directed mutagenesis are used for this purpose, but they are tedious and time consuming. To overcome these limitations, interest has grown in computational models for this type of identification. In this study, an accurate and efficient classification model for identifying ubiquitination sites was constructed. The proposed model uses statistical moments for feature extraction along with a random forest for classification. Three ubiquitination datasets are used to train and test the model. The model is assessed through 10-fold cross-validation and jackknife tests. We achieved a 10-fold accuracy of 100% for dataset-1, 99.88% for dataset-2, and 99.84% for dataset-3, while with the jackknife test we obtained 100% for dataset-1, 99.91% for dataset-2, and 99.99% for dataset-3. The results obtained are close to the maximum and are far better than those of pre-existing models available in the literature.
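The exact moment-based descriptors are not given in the abstract, so the sketch below uses simple statistical moments of integer-encoded sequence windows as a stand-in, with a random forest evaluated by both 10-fold cross-validation and a jackknife (leave-one-out) test on placeholder data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

def moment_features(encoded_window):
    """Simple statistical moments of a numerically encoded peptide window
    (an assumed stand-in for the moment-based features in the abstract)."""
    x = np.asarray(encoded_window, dtype=float)
    c = x - x.mean()
    return np.array([x.mean(), x.std(), (c ** 3).mean(), (c ** 4).mean()])

# Placeholder data: integer-encoded 21-residue windows with binary site labels.
rng = np.random.default_rng(0)
windows = rng.integers(0, 20, size=(200, 21))
labels = rng.integers(0, 2, size=200)
X = np.vstack([moment_features(w) for w in windows])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc_10fold = cross_val_score(clf, X, labels, cv=10, scoring="accuracy").mean()
acc_jackknife = cross_val_score(clf, X, labels, cv=LeaveOneOut(), scoring="accuracy").mean()
print(f"10-fold accuracy: {acc_10fold:.3f}, jackknife accuracy: {acc_jackknife:.3f}")
```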


2020 ◽  
Author(s):  
Young Jae Kim ◽  
Eun Young Yoo ◽  
Kwang Gi Kim

Abstract Background: The purpose of this study was to propose a deep learning-based method for automated detection of the pectoral muscle, in order to reduce misdetection in a computer-aided diagnosis (CAD) system for diagnosing breast cancer in mammography. This study also aimed to assess the performance of the deep learning method for pectoral muscle detection by comparing it to an image processing-based method using the random sample consensus (RANSAC) algorithm. Methods: Using the 322 images in the Mammographic Image Analysis Society (MIAS) database, the pectoral muscle detection model was trained with the U-Net architecture. Of the total data, 80% was allocated as training data and 20% was allocated as test data, and the performance of the deep learning model was tested by 5-fold cross validation. Results: The image processing-based method for pectoral muscle detection using RANSAC showed 92% detection accuracy. Using the 5-fold cross validation, the deep learning-based method showed a mean sensitivity of 95.55%, mean specificity of 99.88%, mean accuracy of 99.67%, and mean Dice similarity coefficient (DSC) of 95.88%. Conclusions: The proposed deep learning-based method of pectoral muscle detection performed better than an existing image processing-based method. In the future, by collecting data from various medical institutions and devices to further train the model and improve its reliability, we expect that this model could greatly reduce misdetection rates by CAD systems for breast cancer diagnosis.
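A Dice similarity coefficient of the kind reported above can be computed for a pair of binary masks as follows; this is a generic implementation, not tied to the authors' pipeline.

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Example: two overlapping rectangular masks standing in for predicted and annotated regions.
a = np.zeros((256, 256), dtype=bool); a[50:150, 40:120] = True
b = np.zeros((256, 256), dtype=bool); b[60:160, 50:130] = True
print(f"DSC = {dice_coefficient(a, b):.3f}")
```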


Author(s):  
Shawni Dutta ◽  
Samir Kumar Bandyopadhyay

Term deposits can accelerate the finance sector and enhance profit from both the bank's and the customer's perspective. This paper focuses on the likelihood that customers will subscribe to a term deposit. Bank campaign efforts and customer details are influential factors when assessing the possibility of a term deposit subscription. This paper provides an automated system that predicts term deposit investment possibilities in advance. A neural network combined with stratified 10-fold cross-validation is proposed as the predictive model and is compared with benchmark classifiers such as k-Nearest Neighbor (k-NN), Decision Tree (DT), and Multi-Layer Perceptron (MLP) classifiers. The experimental study concluded that the proposed model provides significantly better predictions than the baseline models, with an accuracy of 88.32% and an MSE of 0.1168.
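The evaluation protocol described above, a neural network under stratified 10-fold cross-validation scored on accuracy and MSE, can be sketched with scikit-learn as below; the network size and the synthetic, imbalanced data are assumptions standing in for the bank marketing features.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder features standing in for the encoded campaign/customer attributes
# (imbalanced, since most customers do not subscribe).
X, y = make_classification(n_samples=2000, n_features=16, weights=[0.88, 0.12], random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_validate(model, X, y, cv=cv,
                        scoring={"acc": "accuracy", "mse": "neg_mean_squared_error"})
print(f"accuracy: {scores['test_acc'].mean():.4f}, "
      f"MSE: {-scores['test_mse'].mean():.4f}")
```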


Author(s):  
Edouard Berton ◽  
Najib Bouaanani ◽  
Charles-Philippe Lamarche ◽  
Nathalie Roy

Vehicle-bridge collisions (VBCs) can compromise the safety of road users and cause major economic losses. This paper proposes and applies a methodology to investigate such events in Quebec. Relevant data have been collected from various sources and merged to provide a comprehensive database of VBCs which occurred in Quebec between 2000 and 2016. The developed database was used to carry out statistical analyses highlighting the main factors characterizing VBCs, such as vehicle’s body type, bridge dimensions, prescribed speed limit, road configuration, road surface condition and lighting. The compiled database was georeferenced in an upgradable map which can be used efficiently to visualize the distribution/evolution of VBCs over a given region of Quebec. A VBC regression model was also developed based on k-fold cross-validation. The proposed model can be updated regularly as new VBCs are reported and then used to identify bridges most likely to be affected by VBCs or prioritize actions to reduce the potential consequences.
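The abstract does not state the form of the regression model, so the sketch below assumes a Poisson regression on a few illustrative bridge attributes, evaluated with k-fold cross-validation as described; the features, coefficients, and synthetic collision counts are placeholders, not the Quebec database.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.model_selection import KFold, cross_val_score

# Placeholder bridge attributes: clearance (m), speed limit (km/h), daily truck traffic (thousands).
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(3.8, 5.5, 500),
                     rng.choice([50, 70, 90, 100], 500),
                     rng.uniform(0.5, 10.0, 500)])
# Synthetic counts: lower clearance and heavier traffic mean more collisions.
collisions = rng.poisson(np.exp(-0.8 * (X[:, 0] - 4.5) + 0.02 * X[:, 2]))

model = PoissonRegressor(alpha=1e-3, max_iter=300)
cv_scores = cross_val_score(model, X, collisions,
                            cv=KFold(n_splits=5, shuffle=True, random_state=0),
                            scoring="neg_mean_absolute_error")
print(f"5-fold MAE: {-cv_scores.mean():.3f} collisions per bridge")
```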


2020 ◽  
Vol 34 (01) ◽  
pp. 1096-1103 ◽  
Author(s):  
Kai-Cheng Yang ◽  
Onur Varol ◽  
Pik-Mai Hui ◽  
Filippo Menczer

Efficient and reliable social bot classification is crucial for detecting information manipulation on social media. Despite rapid development, state-of-the-art bot detection models still face generalization and scalability challenges, which greatly limit their applications. In this paper we propose a framework that uses minimal account metadata, enabling efficient analysis that scales up to handle the full stream of public tweets of Twitter in real time. To ensure model accuracy, we build a rich collection of labeled datasets for training and validation. We deploy a strict validation system so that model performance on unseen datasets is also optimized, in addition to traditional cross-validation. We find that strategically selecting a subset of training data yields better model accuracy and generalization than exhaustively training on all available data. Thanks to the simplicity of the proposed model, its logic can be interpreted to provide insights into social bot characteristics.
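As a toy illustration of the minimal-metadata approach and of validating generalization on an unseen dataset, the sketch below trains a random forest on a handful of account features and evaluates it on a separately generated collection; the features and data are synthetic assumptions, not the paper's datasets or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def synth_dataset(n, bot_rate):
    """Synthetic account metadata: followers, friends, statuses, age, default-profile flag."""
    y = (rng.random(n) < bot_rate).astype(int)
    followers = rng.lognormal(3 + 2 * (1 - y), 1.0, n)
    friends = rng.lognormal(4 + 1.5 * y, 1.0, n)
    statuses = rng.lognormal(5 + y, 1.0, n)
    age_days = rng.uniform(30, 3000, n) * (1 - 0.5 * y)
    default_profile = (rng.random(n) < 0.2 + 0.5 * y).astype(float)
    X = np.column_stack([followers, friends, statuses, age_days, default_profile])
    return X, y

# Train on one labeled collection, evaluate on a different (unseen) one to probe generalization.
X_train, y_train = synth_dataset(5000, bot_rate=0.4)
X_test, y_test = synth_dataset(2000, bot_rate=0.3)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"cross-dataset AUC: {auc:.3f}")
```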


2020 ◽  
Vol 10 (10) ◽  
pp. 3360
Author(s):  
Mizuho Nishio ◽  
Shunjiro Noguchi ◽  
Koji Fujimoto

Combinations of data augmentation methods and deep learning architectures for automatic pancreas segmentation on CT images are proposed and evaluated. Images from a public CT dataset for pancreas segmentation were used to evaluate the models. A baseline U-net and a deep U-net were chosen as the deep learning models for pancreas segmentation. The data augmentation methods included conventional methods, mixup, and random image cropping and patching (RICAP). Ten combinations of the deep learning models and the data augmentation methods were evaluated. Four-fold cross-validation was performed to train and evaluate these models with the data augmentation methods. The Dice similarity coefficient (DSC) was calculated between the automatic segmentation results and the manually annotated labels, and the results were visually assessed by two radiologists. The performance of the deep U-net was better than that of the baseline U-net, with mean DSCs of 0.703–0.789 and 0.686–0.748, respectively. For both the baseline U-net and the deep U-net, the methods with data augmentation performed better than those without, and mixup and RICAP were more useful than the conventional method. The best mean DSC was obtained using a combination of deep U-net, mixup, and RICAP, and the two radiologists scored the results from this model as good or perfect in 76 and 74 of the 82 cases, respectively.
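Of the augmentation methods listed, mixup is simple to sketch: pairs of images and their masks are blended with a Beta-distributed coefficient. The version below is a generic NumPy illustration for a segmentation batch, not the exact configuration used with the deep U-net.

```python
import numpy as np

def mixup(images, masks, alpha=0.2, rng=None):
    """Mixup for segmentation: blend pairs of images and their label masks
    with a Beta-distributed coefficient (a generic sketch, not the paper's exact setup)."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(images))
    mixed_images = lam * images + (1.0 - lam) * images[perm]
    mixed_masks = lam * masks + (1.0 - lam) * masks[perm]   # soft labels for the Dice/CE loss
    return mixed_images, mixed_masks

# Example with a small synthetic batch of CT slices and binary pancreas masks.
rng = np.random.default_rng(0)
imgs = rng.normal(size=(4, 1, 128, 128)).astype(np.float32)
msks = (rng.random((4, 1, 128, 128)) > 0.95).astype(np.float32)
mix_imgs, mix_msks = mixup(imgs, msks, rng=rng)
print(mix_imgs.shape, mix_msks.shape)
```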

