GourmetNet: Food Segmentation Using Multi-Scale Waterfall Features with Spatial and Channel Attention

Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7504
Author(s):  
Udit Sharma ◽  
Bruno Artacho ◽  
Andreas Savakis

We propose GourmetNet, a single-pass, end-to-end trainable network for food segmentation that achieves state-of-the-art performance. Food segmentation is an important problem, as it is the first step toward nutrition monitoring and food volume and calorie estimation. Our novel architecture incorporates both channel attention and spatial attention information in an expanded multi-scale feature representation using our advanced Waterfall Atrous Spatial Pooling module. GourmetNet refines the feature extraction process by merging features from multiple levels of the backbone through the two attention modules. The refined features are processed with the advanced multi-scale waterfall module, which combines the benefits of cascade filtering and pyramid representations without requiring a separate decoder or post-processing. Our experiments on two food datasets show that GourmetNet significantly outperforms current state-of-the-art methods.
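As an illustration of the waterfall idea behind WASP-style modules (a simplified 1-D sketch, not the paper's implementation), each dilated branch filters the previous branch's output in a cascade, while every branch's output also contributes to the final multi-rate representation:

```python
# Hypothetical sketch of a "waterfall" of dilated (atrous) filters:
# cascade filtering (each branch feeds the next) combined with a
# pyramid-like collection of all branch outputs.

def dilated_conv1d(signal, kernel, dilation):
    """Valid-mode 1-D convolution with a dilated kernel."""
    span = (len(kernel) - 1) * dilation
    out = []
    for i in range(len(signal) - span):
        out.append(sum(kernel[j] * signal[i + j * dilation]
                       for j in range(len(kernel))))
    return out

def waterfall(signal, kernel, rates=(1, 2, 4)):
    """Each branch filters the previous branch's output (the 'waterfall');
    every branch's output is kept for the final representation."""
    branches, current = [], signal
    for r in rates:
        current = dilated_conv1d(current, kernel, r)
        branches.append(current)
    return branches

feats = waterfall([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0], [0.5, 0.5])
# feats[0] is a fine-scale response; feats[-1] sees a much wider context
```

In a real 2-D module the branch outputs would be concatenated along channels; here they are simply returned as a list.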

2021 ◽  
Vol 21 (S2) ◽  
Author(s):  
Daobin Huang ◽  
Minghui Wang ◽  
Ling Zhang ◽  
Haichun Li ◽  
Minquan Ye ◽  
...  

Abstract Background Accurately segmenting the tumor region in MRI images is important for brain tumor diagnosis and radiotherapy planning. At present, manual segmentation is widely adopted in clinical practice, and there is a strong need for an automatic and objective system to alleviate the workload of radiologists. Methods We propose a parallel multi-scale feature fusing architecture to generate rich feature representations for accurate brain tumor segmentation. It comprises two parts: (1) a Feature Extraction Network (FEN) for brain tumor feature extraction at different levels and (2) a Multi-scale Feature Fusing Network (MSFFN) to merge all different-scale features in a parallel manner. In addition, we use two hybrid loss functions to optimize the proposed network for the class imbalance issue. Results We validate our method on BRATS 2015, achieving Dice scores of 0.86, 0.73 and 0.61 for the three tumor regions (complete, core and enhancing), with a model parameter size of only 6.3 MB. Without any post-processing operations, our method still outperforms published state-of-the-art methods on the segmentation of complete tumor regions and obtains competitive performance on the other two regions. Conclusions The proposed parallel structure can effectively fuse multi-level features to generate rich feature representations for high-resolution results. Moreover, the hybrid loss functions can alleviate the class imbalance issue and guide the training process. The proposed method can be used in other medical segmentation tasks.
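A soft Dice term is one common ingredient of hybrid losses used against class imbalance in tumor segmentation; the paper's exact loss combination is not given here, so the following is only a minimal sketch of that ingredient:

```python
# Minimal soft Dice loss sketch: because Dice is a ratio of overlap to
# total mass, a small foreground region (rare tumor voxels) weighs as
# much as a large background, which counters class imbalance.

def soft_dice_loss(pred, target, eps=1e-6):
    """pred: predicted probabilities, target: binary labels (flat lists)."""
    intersection = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    dice = (2.0 * intersection + eps) / (denom + eps)
    return 1.0 - dice
```

In practice such a term is typically summed with a cross-entropy term to form the hybrid loss.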


2011 ◽  
Author(s):  
David Fornaro

Finite Element Analysis (FEA) is a mature technology that has been in use for several decades as a tool to optimize structures for a wide variety of applications. Its application to composite structures is not new; however, the technology for modeling and analyzing the behavior of composite structures continues to evolve on several fronts. This paper provides a review of the current state of the art in composites FEA, with a particular emphasis on applications to yacht structures. Topics covered are divided into three categories: pre-processing, post-processing, and non-linear solutions. Pre-processing topics include meshing, ply properties, laminate definitions, element orientations, global ply tracking and load case development. Post-processing topics include principal stresses, failure indices and strength ratios. Non-linear solution topics include progressive ply failure. Examples are included to highlight the application of advanced finite element analysis methodologies to the optimization of composite yacht structures.
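For readers unfamiliar with the post-processing quantities named above, a failure index and strength ratio can be sketched with the simplest criterion, maximum stress (real post-processors also offer Tsai-Wu, Hashin, and similar interactive criteria):

```python
# Illustrative only: max-stress failure index and strength ratio for a
# single ply, given component stresses and their allowables.

def max_stress_failure_index(stress, allowable):
    """Failure index = max over components of |stress| / allowable.
    FI >= 1 indicates predicted ply failure."""
    return max(abs(s) / a for s, a in zip(stress, allowable))

def strength_ratio(stress, allowable):
    """Strength ratio = factor by which the load could scale before
    failure (the reciprocal of the failure index for a linear criterion)."""
    return 1.0 / max_stress_failure_index(stress, allowable)

fi = max_stress_failure_index([200.0, 40.0, 15.0], [400.0, 100.0, 60.0])
sr = strength_ratio([200.0, 40.0, 15.0], [400.0, 100.0, 60.0])
```

Progressive ply failure analyses repeat this check ply by ply, knocking down the stiffness of failed plies and re-solving.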


2020 ◽  
Vol 34 (07) ◽  
pp. 11037-11044
Author(s):  
Lianghua Huang ◽  
Xin Zhao ◽  
Kaiqi Huang

A key capability of a long-term tracker is to search for targets in very large areas (typically the entire image) to handle possible target absences or tracking failures. However, currently there is a lack of such a strong baseline for global instance search. In this work, we aim to bridge this gap. Specifically, we propose GlobalTrack, a pure global instance search based tracker that makes no assumption on the temporal consistency of the target's positions and scales. GlobalTrack is developed based on two-stage object detectors, and it is able to perform full-image and multi-scale search of arbitrary instances with only a single query as the guide. We further propose a cross-query loss to improve the robustness of our approach against distractors. With no online learning, no punishment on position or scale changes, no scale smoothing and no trajectory refinement, our pure global instance search based tracker achieves comparable, sometimes much better performance on four large-scale tracking benchmarks (i.e., 52.1% AUC on LaSOT, 63.8% success rate on TLP, 60.3% MaxGM on OxUvA and 75.4% normalized precision on TrackingNet), compared to state-of-the-art approaches that typically require complex post-processing. More importantly, our tracker runs without cumulative errors, i.e., any type of temporary tracking failures will not affect its performance on future frames, making it ideal for long-term tracking. We hope this work will be a strong baseline for long-term tracking and will stimulate future works in this area.


Author(s):  
Yanbing Geng ◽  
Yongjian Lian ◽  
Shunmin Yang ◽  
Mingliang Zhou ◽  
Jingchao Cao

Person Re-ID is challenged by background clutter, body misalignment and missing parts. In this paper, we propose a reliable part-based, multi-level attention deep network to learn multi-scale salience representations. In particular, person alignment and key-point detection are carried out sequentially to locate three relatively stable body components; a fused attention (FA) module is then designed to capture fine-grained salient features from the effective spatial regions of valuable channels of each part, followed by a regional attention module that weights the importance of different parts, highlighting the representative ones while suppressing the valueless ones. A late-fusion multi-task loss is finally adopted to further optimize the feature representation. Experimental results demonstrate that the proposed method achieves state-of-the-art performance on three challenging benchmarks: Market-1501, DukeMTMC-reID and CUHK03.
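The channel half of such a fused-attention module can be sketched in squeeze-and-excitation style (a simplification; the paper's FA module also has a spatial branch, omitted here):

```python
# Hedged sketch of channel attention: global-average-pool each channel
# into a descriptor, squash it to a weight, and rescale the channel so
# that salient channels keep more signal.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feature_maps):
    """feature_maps: list of channels, each a flat list of activations.
    Returns the reweighted channels."""
    descriptors = [sum(ch) / len(ch) for ch in feature_maps]   # squeeze
    weights = [sigmoid(d) for d in descriptors]                # excite
    return [[w * v for v in ch] for w, ch in zip(weights, feature_maps)]
```

Regional attention works analogously one level up, weighting whole body parts instead of channels.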


2017 ◽  
Vol 2017 ◽  
pp. 1-11
Author(s):  
Yingsheng Ye ◽  
Xingming Zhang ◽  
Wing W. Y. Ng

Accompanying the growth of surveillance infrastructure, surveillance IP cameras are multiplying rapidly, crowding the Internet of Things (IoT) with countless surveillance frames and increasing the need for person re-identification (Re-ID) in video search for surveillance and forensic fields. In real scenarios, the performance of currently proposed Re-ID methods suffers from pose and viewpoint variations, because feature extraction includes background pixels and the feature selection strategy is fixed regardless of pose and viewpoint. To deal with pose and viewpoint variations, we propose the color distribution pattern metric (CDPM) method, employing the color distribution pattern (CDP) for feature representation and an SVM for classification. Unlike other methods, CDP does not extract features over a fixed number of dense blocks and is therefore unaffected by varied pedestrian image resolutions and resizing distortion. Moreover, it provides more precise features with less background influence under different body types, severe pose variations, and viewpoint variations. Experimental results show that our CDPM method achieves state-of-the-art performance on both the 3DPeS dataset and the ImageLab Pedestrian Recognition dataset, with 68.8% and 79.8% rank-1 accuracy, respectively, under the single-shot experimental setting.
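The intuition behind a resolution-independent color-distribution feature can be sketched as a normalized color histogram per horizontal body stripe; because histograms are normalized by pixel count, no resizing (and hence no resizing distortion) is needed. The actual CDP descriptor differs in its details:

```python
# Illustrative sketch: per-stripe, per-channel color histograms,
# normalized so the descriptor is independent of image resolution.

def stripe_color_histogram(pixels, height, width, stripes=3, bins=4):
    """pixels: row-major list of (r, g, b) tuples, values in [0, 256).
    Returns, per stripe, one normalized histogram per color channel."""
    feats = []
    for s in range(stripes):
        top, bottom = s * height // stripes, (s + 1) * height // stripes
        hist = [[0] * bins for _ in range(3)]
        count = 0
        for y in range(top, bottom):
            for x in range(width):
                for c, v in enumerate(pixels[y * width + x]):
                    hist[c][v * bins // 256] += 1
                count += 1
        feats.append([[h / count for h in ch] for ch in hist])
    return feats
```

An SVM (as in CDPM) would then classify pairs of such descriptors as same/different identity.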


2019 ◽  
Vol 12 (2) ◽  
pp. 103
Author(s):  
Kuntoro Adi Nugroho ◽  
Yudi Eko Windarto

Various methods are available to perform feature extraction on satellite images. Among the available alternatives, the deep convolutional neural network (ConvNet) is the state-of-the-art method. Although previous studies have reported successful attempts at developing and implementing ConvNets for remote sensing applications, several issues are not well explored, such as the use of depthwise convolution, the final pooling layer size, and the comparison between grayscale and RGB settings. The objective of this study is to perform an analysis addressing these issues. Two feature learning algorithms were compared: ConvNet, as the current state of the art for satellite image classification, and the Gray Level Co-occurrence Matrix (GLCM), which represents a classic unsupervised feature extraction method. The experiment demonstrated results consistent with previous studies: ConvNet is superior in most cases compared to GLCM, especially with 3x3xn final pooling. The performance of the learning algorithms is much higher on features from RGB channels, except for ConvNet with a relatively small number of features.
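The GLCM baseline is simple enough to sketch directly: count how often pairs of gray levels co-occur at a fixed pixel offset (library implementations such as scikit-image additionally symmetrize, normalize, and evaluate multiple offsets):

```python
# Minimal Gray Level Co-occurrence Matrix (GLCM) for one offset.
# Texture statistics (contrast, homogeneity, energy, ...) are then
# computed from this matrix and used as classic unsupervised features.

def glcm(image, levels, dx=1, dy=0):
    """image: 2-D list of gray levels in [0, levels).
    m[i][j] counts occurrences of level j at offset (dx, dy) from level i."""
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for y in range(rows - dy):
        for x in range(cols - dx):
            m[image[y][x]][image[y + dy][x + dx]] += 1
    return m

g = glcm([[0, 0, 1],
          [1, 1, 0]], levels=2)
# horizontal pairs: (0,0), (0,1), (1,1), (1,0)
```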


2021 ◽  
Author(s):  
Xiangchun Li ◽  
Xilin Shen

Integration of ever-growing large-scale single-cell transcriptomes requires scalable batch-correction approaches. Here we propose a simple batch-correction method that is scalable for integrating super-large-scale single-cell transcriptomes from diverse sources. The core idea of the method is to encode the batch information of each cell as a trainable parameter added to its expression profile; subsequently, a contrastive learning approach is used to learn a feature representation of the additive expression profile. We demonstrate the scalability of the proposed method by integrating 18 million cells obtained from the Human Cell Atlas. Our benchmark comparisons with current state-of-the-art single-cell integration methods demonstrate that our method achieves comparable data alignment and cluster preservation. Our study should facilitate the integration of super-large-scale single-cell transcriptomes. The source code is available at https://github.com/xilinshen/Fugue.
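The additive core idea, as stated in the abstract, can be sketched as one trainable offset vector per batch added to every cell's profile before the contrastive encoder sees it (Fugue's actual training loop and encoder are omitted):

```python
# Sketch of the batch-embedding step: each batch has a trainable offset
# vector; correcting a cell means adding its batch's offset to its
# expression profile.

def add_batch_embedding(expression, batch_ids, batch_embeddings):
    """expression: list of per-cell profiles (lists of floats);
    batch_ids: batch label per cell;
    batch_embeddings: dict batch_id -> trainable offset vector."""
    return [[e + b for e, b in zip(cell, batch_embeddings[bid])]
            for cell, bid in zip(expression, batch_ids)]

profiles = add_batch_embedding(
    [[1.0, 2.0], [1.0, 2.0]],            # two cells, same raw profile
    ["batchA", "batchB"],
    {"batchA": [0.0, 0.0], "batchB": [0.5, -0.5]})
```

During training, the offsets would be updated by gradient descent together with the encoder, under the contrastive objective.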


Information ◽  
2022 ◽  
Vol 13 (1) ◽  
pp. 32
Author(s):  
Gang Sun ◽  
Hancheng Yu ◽  
Xiangtao Jiang ◽  
Mingkui Feng

Edge detection is one of the fundamental computer vision tasks. Recent methods for edge detection based on a convolutional neural network (CNN) typically employ the weighted cross-entropy loss. Their predicted results are thick and need post-processing before the optimal dataset scale (ODS) F-measure can be calculated for evaluation. To achieve end-to-end training, we propose a non-maximum suppression (NMS) layer to obtain sharp boundaries without the need for post-processing. The ODS F-measure can be calculated based on these sharp boundaries, so an ODS F-measure loss function is proposed to train the network. Besides, we propose an adaptive multi-level feature pyramid network (AFPN) to better fuse different levels of features. Furthermore, to enrich the multi-scale features learned by AFPN, we introduce a pyramid context module (PCM) that uses dilated convolution to extract multi-scale features. Experimental results indicate that the proposed AFPN achieves state-of-the-art performance on the BSDS500 dataset (ODS F-score of 0.837) and the NYUDv2 dataset (ODS F-score of 0.780).
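The metric the loss is built around can be sketched directly: with thin boundaries, precision and recall are computed per threshold, and ODS picks the single threshold that maximizes F over the dataset (a simplification; the benchmark also tolerates small localization errors when matching boundaries):

```python
# Sketch of the ODS F-measure: the best F-score over a shared set of
# binarization thresholds, computed on already-thin boundary maps.

def f_measure(pred, truth):
    """pred, truth: flat binary boundary maps (lists of 0/1)."""
    tp = sum(p & t for p, t in zip(pred, truth))
    if tp == 0:
        return 0.0
    precision, recall = tp / sum(pred), tp / sum(truth)
    return 2 * precision * recall / (precision + recall)

def ods_f_measure(scores, truth, thresholds=(0.3, 0.5, 0.7)):
    """Pick the single threshold that maximizes F."""
    return max(f_measure([int(s >= t) for s in scores], truth)
               for t in thresholds)
```

Making this quantity (or a differentiable surrogate of it) the training loss is what requires the sharp boundaries produced by the NMS layer.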


Author(s):  
Qijie Zhao ◽  
Tao Sheng ◽  
Yongtao Wang ◽  
Zhi Tang ◽  
Ying Chen ◽  
...  

Feature pyramids are widely exploited by both state-of-the-art one-stage object detectors (e.g., DSSD, RetinaNet, RefineDet) and two-stage object detectors (e.g., Mask R-CNN, DetNet) to alleviate the problem arising from scale variation across object instances. Although these object detectors with feature pyramids achieve encouraging results, they have some limitations, because they simply construct the feature pyramid according to the inherent multi-scale, pyramidal architecture of backbones originally designed for the object classification task. In this work, we present the Multi-Level Feature Pyramid Network (MLFPN) to construct more effective feature pyramids for detecting objects of different scales. First, we fuse multi-level features (i.e., multiple layers) extracted by the backbone as the base feature. Second, we feed the base feature into a block of alternating joint Thinned U-shape Modules and Feature Fusion Modules and exploit the decoder layers of each U-shape module as the features for detecting objects. Finally, we gather up the decoder layers with equivalent scales (sizes) to construct a feature pyramid for object detection, in which every feature map consists of the layers (features) from multiple levels. To evaluate the effectiveness of the proposed MLFPN, we design and train a powerful end-to-end one-stage object detector, which we call M2Det, by integrating it into the architecture of SSD, and achieve better detection performance than state-of-the-art one-stage detectors. Specifically, on the MS COCO benchmark, M2Det achieves an AP of 41.0 at 11.8 FPS with a single-scale inference strategy and an AP of 44.2 with a multi-scale inference strategy, which are the new state-of-the-art results among one-stage detectors. The code will be made available at https://github.com/qijiezhao/M2Det.
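The final gathering step can be sketched as grouping decoder outputs of equal spatial size across the U-shape modules into one pyramid level (a simplification; the real network concatenates along the channel dimension, and features here are plain lists):

```python
# Hypothetical sketch of MLFPN's gathering step: decoder outputs of the
# same scale, coming from shallower and deeper modules, are merged into
# a single multi-level pyramid feature per scale.

def gather_pyramid(module_outputs):
    """module_outputs: one list per U-shape module of (size, feature) pairs.
    Returns {size: concatenated multi-level feature}."""
    pyramid = {}
    for outputs in module_outputs:
        for size, feature in outputs:
            pyramid.setdefault(size, []).extend(feature)
    return pyramid

levels = gather_pyramid([
    [(40, [0.1, 0.2]), (20, [0.3])],   # shallow module's decoder outputs
    [(40, [0.4, 0.5]), (20, [0.6])],   # deeper module's decoder outputs
])
# each pyramid level now mixes shallow and deep features of that scale
```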


2020 ◽  
Vol 34 (07) ◽  
pp. 12192-12199 ◽  
Author(s):  
Peisong Wang ◽  
Xiangyu He ◽  
Gang Li ◽  
Tianli Zhao ◽  
Jian Cheng

Binarization of feature representations is critical for Binarized Neural Networks (BNNs). Currently, the sign function is the commonly used method for feature binarization. Although it works well on small datasets, its performance on ImageNet remains unsatisfactory. Previous methods mainly focus on minimizing quantization error, improving training strategies, and decomposing each convolution layer into several binary convolution modules. However, whether sign is the only option for binarization has been largely overlooked. In this work, we propose the Sparsity-inducing Binarized Neural Network (Si-BNN), which quantizes the activations to be either 0 or +1, introducing sparsity into the binary representation. We further introduce trainable thresholds into the backward function of binarization to guide the gradient propagation. Our method dramatically outperforms the current state of the art, lowering the performance gap between full-precision networks and BNNs on mainstream architectures and achieving new state-of-the-art results on binarized AlexNet (Top-1 50.5%), ResNet-18 (Top-1 59.7%), and VGG-Net (Top-1 63.2%). At inference time, Si-BNN still enjoys the high efficiency of exclusive-not-or (XNOR) operations.
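The two pieces described can be sketched as follows (a simplified reading of the abstract, not the paper's exact formulation): a forward pass that quantizes to {0, +1} instead of sign's {-1, +1}, and a straight-through backward pass whose gradient window is set by a trainable threshold:

```python
# Sketch of sparsity-inducing binarization: the binary code is sparse
# whenever most activations sit below the forward threshold.

def si_binarize_forward(activations, threshold=0.0):
    """Forward: 1 where the activation clears the threshold, else 0."""
    return [1.0 if a > threshold else 0.0 for a in activations]

def si_binarize_backward(activations, grad_out, backward_threshold=1.0):
    """Backward (straight-through style): pass the gradient only where
    |activation| lies inside the trainable window, block it elsewhere."""
    return [g if abs(a) < backward_threshold else 0.0
            for a, g in zip(activations, grad_out)]

codes = si_binarize_forward([-0.4, 0.1, 1.3, -2.0])
```

With {0, +1} codes, zero activations simply drop out of the binary dot products, which is where the induced sparsity pays off at inference time.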

