CORSegNet: Deep Neural Network for Core Object Segmentation on Medical Images

2021
Vol 11 (5)
pp. 1364-1371
Author(s):
Ching Wai Yong
Kareen Teo
Belinda Pingguan-Murphy
Yan Chai Hum
Khin Wee Lai

In recent decades, convolutional neural networks (CNNs) have delivered promising results in vision-related tasks across different domains. Previous studies have introduced deeper network architectures to further improve performance in object classification, localization, and segmentation. However, depth complicates the mapping between a network's layers and the processing elements of the ventral visual pathway. Although CORnet models are not precisely biomimetic, they approximate the anatomy of the ventral visual pathway more closely than other deep neural networks. The uniqueness of this architecture inspires us to extend it into a core object segmentation network, CORSegNet-Z, which uses CORnet-Z building blocks as its encoding elements. We train and evaluate the proposed model on two large datasets. The model shows significant improvements on segmentation metrics when delineating cartilage tissue in knee magnetic resonance (MR) images and segmenting lesion boundaries in dermoscopic images.
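The CORnet-Z building block referenced here is, per Kubilius et al. (2018), a single convolution followed by ReLU and max-pooling. Below is a minimal PyTorch sketch of such an encoder; the channel widths are illustrative assumptions, and the decoder needed to turn this into a segmentation network is omitted.

```python
import torch
import torch.nn as nn

class CORBlockZ(nn.Module):
    """A CORnet-Z-style encoding block: one convolution followed by
    ReLU and spatial max-pooling (after Kubilius et al., 2018)."""

    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              stride=stride, padding=kernel_size // 2)
        self.nonlin = nn.ReLU(inplace=True)
        self.pool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)

    def forward(self, x):
        return self.pool(self.nonlin(self.conv(x)))

# Illustrative encoder: four blocks loosely mirroring V1 -> V2 -> V4 -> IT.
encoder = nn.Sequential(
    CORBlockZ(3, 64, kernel_size=7, stride=2),  # "V1"
    CORBlockZ(64, 128),                         # "V2"
    CORBlockZ(128, 256),                        # "V4"
    CORBlockZ(256, 512),                        # "IT"
)

features = encoder(torch.randn(1, 3, 224, 224))  # multi-scale features for a decoder
```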

2019
Vol 39 (33)
pp. 6513-6525
Author(s):
Stefania Bracci
J. Brendan Ritchie
Ioannis Kalfas
Hans P. Op de Beeck

2021
Vol 18 (2)
pp. 40-55
Author(s):
Lídio Mauro Lima Campos
Jherson Haryson Almeida Pereira
Danilo Souza Duarte
Roberto Célio Limão Oliveira
...

The aim of this paper is to introduce a biologically inspired approach that can automatically generate deep neural networks with good prediction capacity, smaller error, and high tolerance to noise. To do this, three biological paradigms are combined: Genetic Algorithms (GA), Lindenmayer Systems (L-systems), and Deep Neural Networks (DNNs). The final sections of the paper present experiments investigating the method's ability to forecast energy prices in the Brazilian market. The proposed model performs multi-step-ahead price prediction (12, 24, and 36 weeks ahead). The results for MLP and LSTM networks show a good ability to predict peaks and satisfactory accuracy on error measures when compared with other methods.
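As a rough illustration of the L-system idea, the sketch below rewrites an axiom into a string of layer tokens and decodes the string into a network. The rewriting rules, token meanings, and dimensions are hypothetical, and the GA that would evolve the rules is omitted.

```python
import torch.nn as nn

# Hypothetical L-system: "D" stands for a dense layer, "A" is the
# expansion point. A GA (not shown) would evolve RULES and the decoder.
RULES = {"A": "DA", "D": "D"}

def derive(axiom="A", steps=3):
    """Apply the rewriting rules `steps` times: A -> DA -> DDA -> DDDA."""
    s = axiom
    for _ in range(steps):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

def decode(symbols, in_dim=8, hidden=32, out_dim=1):
    """Map each 'D' token to a Linear+ReLU pair, closing with a head."""
    layers, dim = [], in_dim
    for ch in symbols:
        if ch == "D":
            layers += [nn.Linear(dim, hidden), nn.ReLU()]
            dim = hidden
    layers.append(nn.Linear(dim, out_dim))
    return nn.Sequential(*layers)

net = decode(derive())  # "DDDA" -> a 3-hidden-layer regression network
```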


2019 ◽  
Vol 31 (3) ◽  
pp. 538-554
Author(s):  
Michael Hauser ◽  
Sean Gunn ◽  
Samer Saab ◽  
Asok Ray

This letter deals with neural networks as dynamical systems governed by finite difference equations. It shows that the introduction of k-many skip connections into network architectures, such as residual networks and additive dense networks, defines kth-order dynamical equations on the layer-wise transformations. Closed-form solutions for the state-space representations of general kth-order additive dense networks, where the concatenation operation is replaced by addition, as well as kth-order smooth networks, are found. The developed provision endows deep neural networks with an algebraic structure. Furthermore, it is shown that imposing kth-order smoothness on network architectures with d-many nodes per layer increases the state-space dimension by a multiple of k, and so the effective embedding dimension of the data manifold by the neural network is d·k-many dimensions. It follows that network architectures of these types reduce the number of parameters needed to maintain the same embedding dimension by a factor of k when compared to an equivalent first-order residual network. Numerical simulations and experiments on CIFAR-10, SVHN, and MNIST have been conducted to help understand the developed theory and the efficacy of the proposed concepts.
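As an illustration, a k = 2 connection makes each update depend on the two preceding states. The sketch below uses the update x[n+1] = x[n] + x[n-1] + f(x[n]), which is one plausible second-order form for intuition, not the letter's exact closed-form construction.

```python
import torch
import torch.nn as nn

class SecondOrderBlock(nn.Module):
    """Sketch of a k = 2 skip connection: the layer output depends on
    the two preceding states, so the layer-wise map is a second-order
    finite difference equation (illustrative form only)."""

    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                               nn.Linear(dim, dim))

    def forward(self, x_curr, x_prev):
        x_next = x_curr + x_prev + self.f(x_curr)
        return x_next, x_curr  # new (current, previous) state pair

dim = 16
blocks = nn.ModuleList(SecondOrderBlock(dim) for _ in range(4))
x_curr = torch.randn(8, dim)
x_prev = torch.zeros_like(x_curr)  # zero initial "velocity" state
for blk in blocks:
    x_curr, x_prev = blk(x_curr, x_prev)
```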


Algorithms ◽  
2020 ◽  
Vol 13 (12) ◽  
pp. 342
Author(s):  
Guojing Huang ◽  
Qingliang Chen ◽  
Congjian Deng

With the development of e-commerce, online advertising has thrived and gradually developed into a new mode of business, for which Click-Through Rate (CTR) prediction is the essential driving technology. Given a user, a commodity, and a scenario, a CTR model predicts the user's probability of clicking an online advertisement. Recently, great progress has been made by introducing Deep Neural Networks (DNNs) into CTR prediction. To further advance DNN-based CTR prediction models, this paper introduces a new model, FO-FTRL-DCN, based on the well-established Deep & Cross Network (DCN) augmented with the Follow The Regularized Leader (FTRL) optimization technique. Extensive comparative experiments on the iPinYou datasets show that the proposed model outperforms other state-of-the-art baselines, with better generalization across the different datasets in the benchmark.
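The DCN side of the proposed model is well documented: its cross layers implement the recurrence x_{l+1} = x_0 (x_l · w_l) + b_l + x_l (Wang et al., 2017), which builds explicit bounded-degree feature crosses. A minimal sketch of a cross stack follows; PyTorch ships no built-in FTRL optimizer, so the FTRL half, which would replace the training-time optimizer, is left out.

```python
import torch
import torch.nn as nn

class CrossLayer(nn.Module):
    """One cross layer of Deep & Cross Network (Wang et al., 2017):
    x_{l+1} = x_0 * (x_l . w) + b + x_l."""

    def __init__(self, dim):
        super().__init__()
        self.w = nn.Parameter(torch.randn(dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(dim))

    def forward(self, x0, xl):
        # Per-sample scalar interaction (batch, 1), then rescale x0.
        xw = (xl * self.w).sum(dim=1, keepdim=True)
        return x0 * xw + self.b + xl

dim = 32
x0 = torch.randn(4, dim)        # embedded input features
cross = nn.ModuleList(CrossLayer(dim) for _ in range(3))
xl = x0
for layer in cross:
    xl = layer(x0, xl)          # degree of the crosses grows per layer
```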


Author(s):  
Vikas Verma ◽  
Alex Lamb ◽  
Juho Kannala ◽  
Yoshua Bengio ◽  
David Lopez-Paz

We introduce Interpolation Consistency Training (ICT), a simple and computationally efficient algorithm for training deep neural networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. In classification problems, ICT moves the decision boundary to low-density regions of the data distribution. Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets.
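The core ICT objective fits in a few lines. The sketch below assumes a mean-teacher (EMA) copy of the model, as used in the ICT paper, classification outputs, and the usual Beta(α, α) mixup coefficient; the function and argument names are illustrative.

```python
import torch
import torch.nn.functional as F

def ict_loss(model, ema_model, u1, u2, alpha=0.5):
    """ICT consistency term: the student's prediction at a mixup of two
    unlabeled batches should match the same mixup of the (EMA teacher's)
    predictions at those batches."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    with torch.no_grad():  # teacher targets are not backpropagated through
        target = (lam * ema_model(u1).softmax(-1)
                  + (1 - lam) * ema_model(u2).softmax(-1))
    pred = model(lam * u1 + (1 - lam) * u2).softmax(-1)
    return F.mse_loss(pred, target)
```

This term is added, with a ramp-up weight, to the ordinary supervised cross-entropy on the labeled batch.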


Author(s):  
Yusuke Iwasawa ◽  
Kotaro Nakayama ◽  
Ikuko Yairi ◽  
Yutaka Matsuo

Deep neural networks have been successfully applied to activity recognition with wearables in terms of recognition performance. However, the black-box nature of neural networks raises privacy concerns: it is generally hard to anticipate what a network will learn from data, so it may unintentionally learn features that are highly discriminative of user identity, increasing the risk of information disclosure. In this study, we analyzed the features learned by conventional deep neural networks applied to wearable sensor data and confirmed this phenomenon. Based on the results of our analysis, we propose an adversarial training framework to suppress the risk of disclosing sensitive or unintended information. The proposed model trains an adversarial user classifier alongside the regular activity classifier, which lets the model learn representations that help distinguish activities while preventing access to user-discriminative information. This paper provides an empirical validation of the privacy issue and of the efficacy of the proposed method on three activity recognition tasks based on wearable sensor data. The validation shows that the proposed method suppresses the privacy risk without significant performance degradation compared with conventional deep networks on all three tasks.
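The abstract does not spell out the adversarial mechanism; a gradient reversal layer (Ganin & Lempitsky, 2015) is one standard way to realize a "learn the activity, unlearn the user" objective. A sketch with hypothetical layer sizes:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated, scaled gradient in the
    backward pass, so the encoder is trained *against* the user head."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU())  # shared features
activity_head = nn.Linear(32, 6)   # e.g. 6 activity classes (assumed)
user_head = nn.Linear(32, 10)      # adversarial head over 10 user IDs (assumed)

x = torch.randn(16, 64)
z = encoder(x)
act_logits = activity_head(z)                        # trained normally
user_logits = user_head(GradReverse.apply(z, 1.0))   # gradient reversed:
# minimizing the user-classification loss here pushes the encoder to
# *remove* user-discriminative information from z.
```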


Author(s):  
Lei Shi ◽  
Cosmin Copot ◽  
Steve Vanlanduit

Deep Neural Networks (DNNs) have shown great success in many fields, and various network architectures have been developed for different applications. Regardless of their complexity, however, DNNs do not provide model uncertainty. Bayesian Neural Networks (BNNs), on the other hand, can make probabilistic inferences. Among the various types of BNNs, Dropout as a Bayesian Approximation converts a neural network (NN) to a BNN by adding a dropout layer after each weight layer, a simple transformation from an NN to a BNN. For DNNs, however, adding a dropout layer after every weight layer leads to strong regularization because of the deep architecture. Previous studies [1, 2, 3] have shown that adding a dropout layer after each weight layer in a DNN is unnecessary, but how to place dropout layers in a ResNet for regression tasks is less explored. In this work, we perform an empirical study of how different dropout placements affect the performance of a Bayesian DNN. We use a regression model modified from ResNet as the DNN and place dropout layers at different positions in the regression ResNet. Our experimental results show that it is not necessary to add a dropout layer after every weight layer in the regression ResNet for it to make Bayesian inferences. Placing dropout layers between the stacked blocks (i.e., Dense+Identity+Identity blocks) gives the best Predictive Interval Coverage Probability (PICP), while placing a dropout layer after each stacked block gives the best Root Mean Square Error (RMSE).
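Whatever the dropout placement, the Bayesian inference step is the same Monte Carlo dropout procedure: keep dropout active at test time, draw repeated stochastic forward passes, and score the resulting intervals with PICP. A minimal sketch, assuming a regression model without BatchNorm (since model.train() would also switch BatchNorm behavior):

```python
import torch

@torch.no_grad()
def mc_dropout_interval(model, x, n_samples=100, z=1.96):
    """Monte Carlo dropout: sample stochastic forward passes and form a
    Gaussian-style predictive interval from the sample mean and std."""
    model.train()  # keeps Dropout layers sampling at test time
    preds = torch.stack([model(x) for _ in range(n_samples)])
    mean, std = preds.mean(0), preds.std(0)
    return mean - z * std, mean + z * std

def picp(lower, upper, y):
    """Predictive Interval Coverage Probability: the fraction of
    targets that fall inside the predicted interval."""
    return ((y >= lower) & (y <= upper)).float().mean().item()
```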

