combine strategy
Recently Published Documents

TOTAL DOCUMENTS: 7 (five years: 6)
H-INDEX: 1 (five years: 1)
2021, Vol 11 (1)
Author(s): Mengyuan Wang, Haiying Wang, Huiru Zheng, Dusan Uhrin, Richard J. Dewhurst, ...

Abstract Accurate quantification of volatile fatty acid (VFA) concentrations in rumen fluid is essential for research on rumen metabolism. This study comprehensively investigated the pros and cons of high-performance liquid chromatography (HPLC) and 1H nuclear magnetic resonance (1H-NMR) methods for rumen VFA quantification. We also investigated the performance of several commonly used data pre-treatments for the two sets of data using correlation analysis, principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA). The molar proportion and reliability analyses demonstrated that the two approaches produce highly consistent VFA concentrations. In the pre-processing of NMR spectra, line broadening and shim correction may reduce estimated metabolite concentrations. We observed differences in results when using multiplets of different protons from one compound, and identified "handle signals" that provided the most consistent concentrations. Different data pre-treatment strategies tested with both HPLC and NMR data significantly affected the results of downstream analysis. The "Normalize by sum" pre-treatment can eliminate a large number of positive correlations between NMR-based VFAs. A "Combine" strategy should be the first choice when calculating correlations between metabolites or between samples. The PCA and PLS-DA results suggest that, except for "Normalize by sum", pre-treatments should be used with caution.
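To make the "Normalize by sum" pre-treatment concrete, the following Python sketch scales each sample's VFA concentrations by their row total and compares metabolite correlations before and after. The concentration values and metabolite names are hypothetical placeholders, not data from the study.

```python
import numpy as np
import pandas as pd

# Hypothetical VFA concentrations (mM) for four rumen-fluid samples.
vfa = pd.DataFrame(
    {"acetate":    [62.1, 70.4, 55.3, 66.8],
     "propionate": [21.5, 24.9, 18.2, 23.1],
     "butyrate":   [11.8, 13.2,  9.7, 12.4]},
)

# "Normalize by sum": express each metabolite as a proportion of the
# sample's total VFA concentration (row-wise normalization).
vfa_norm = vfa.div(vfa.sum(axis=1), axis=0)

# Correlations between metabolites before and after pre-treatment.
print(vfa.corr(method="pearson"))       # raw concentrations
print(vfa_norm.corr(method="pearson"))  # proportions that sum to 1 per sample
```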


Electronics, 2021, Vol 10 (22), pp. 2807
Author(s): Wentao Ma, Panfei Cai, Fengyuan Sun, Xiao Kou, Xiaofei Wang, ...

Classical adaptive filtering algorithms with a diffusion strategy under the mean square error (MSE) criterion face difficulties in distributed estimation (DE) over networks in complex noise environments, such as non-zero-mean non-Gaussian noise, where robust performance must be ensured. To overcome these limitations, this paper proposes a novel robust diffusion adaptive filtering algorithm developed from a generalized maximum correntropy criterion with variable center (GMCC-VC). Generalized correntropy with a variable center is first defined by introducing a non-zero center into the original generalized correntropy; the resulting measure, GMCC-VC, can be used as a robust cost function for adaptive filtering algorithms. To improve the robustness of traditional MSE-based DE algorithms, GMCC-VC is used in a diffusion adaptive filter to design a novel robust DE method with the adapt-then-combine strategy. This achieves outstanding steady-state performance in non-Gaussian noise environments because the variable center allows GMCC-VC to match the distribution of non-zero-mean non-Gaussian noise. Simulation results for distributed estimation under non-zero-mean non-Gaussian noise demonstrate that the proposed diffusion GMCC-VC approach produces more robust and stable performance than comparable DE methods.
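A minimal sketch of the adapt-then-combine diffusion structure described above, using a generalized-correntropy-style weight update with a non-zero center. The ring network, step size, and kernel parameters are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, T = 5, 4, 2000             # nodes, filter taps, iterations
w_true = rng.standard_normal(M)  # common parameter vector to estimate

# Symmetric combination weights for a ring network (rows sum to 1); assumed topology.
A = np.eye(N) * 0.5
for k in range(N):
    A[k, (k - 1) % N] = A[k, (k + 1) % N] = 0.25

w = np.zeros((N, M))
mu, alpha, lam, c = 0.01, 2.0, 0.5, 0.3   # step size, GC shape/scale, center

for _ in range(T):
    psi = np.empty_like(w)
    for k in range(N):                           # adapt step at each node
        u = rng.standard_normal(M)               # regressor
        noise = 0.2 + 0.1 * rng.standard_t(df=2) # non-zero-mean, heavy-tailed
        d = u @ w_true + noise
        e = d - u @ w[k]
        # Gradient direction of a correntropy cost exp(-lam*|e-c|^alpha):
        # the exponential factor down-weights large residuals (robustness).
        g = np.exp(-lam * abs(e - c) ** alpha) * abs(e - c) ** (alpha - 1) * np.sign(e - c)
        psi[k] = w[k] + mu * g * u
    w = A @ psi                                  # combine step: mix neighbor estimates

print(np.round(w.mean(axis=0), 3), np.round(w_true, 3))
```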


2021
Author(s): Mengyuan Wang, Haiying Wang, Huiru Zheng, Dusan Uhrin, Richard Dewhurst, ...

Abstract Accurate quantification of volatile fatty acid (VFA) concentrations in rumen fluid is essential for research on rumen metabolism. This study comprehensively investigated the pros and cons of high-performance liquid chromatography (HPLC) and 1H nuclear magnetic resonance (1H-NMR) methods for rumen VFA quantification. We also investigated the performance of several commonly used data pre-treatments for the two sets of data using correlation analysis, principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA). The molar proportion and reliability analyses demonstrated that the two approaches produce highly consistent VFA concentrations. In the pre-processing of NMR spectra, line broadening and shim correction may reduce estimated metabolite concentrations. We identified differences in results obtained with different spectral clusters and provide the chemical shifts of the clusters recommended as quantitative references for VFAs. Different data pre-treatment strategies tested with both HPLC and NMR data significantly affected the results of downstream analysis. The "Normalize by sum" pre-treatment can eliminate a large number of positive correlations between NMR-based VFAs. A "Combine" strategy should be the first choice when calculating correlations between metabolites or between samples. The PCA and PLS-DA results suggest that, except for "Normalize by sum", pre-treatments should be used with caution.
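The downstream analyses named in the abstract (PCA and PLS-DA) can be outlined with scikit-learn; PLS-DA is commonly implemented as PLS regression against one-hot class labels. The feature matrix and group labels below are hypothetical stand-ins for pre-treated VFA data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 6))        # 20 samples x 6 VFA features (hypothetical)
y = np.repeat([0, 1], 10)               # two treatment groups

Xs = StandardScaler().fit_transform(X)  # a common pre-treatment (auto-scaling)

scores = PCA(n_components=2).fit_transform(Xs)  # unsupervised projection

# PLS-DA: PLS regression on a one-hot encoding of the class labels.
Y = np.eye(2)[y]
pls = PLSRegression(n_components=2).fit(Xs, Y)
lv = pls.transform(Xs)                  # supervised latent-variable scores
print(scores[:2], lv[:2])
```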


2020
Author(s): Yang Hu, Zhiwu Zhang

Abstract Grain characteristics, including kernel length, kernel width, and thousand-kernel weight, are critical component traits for grain yield. Manual measurement and counting are expensive, forming a bottleneck for dissecting the genetic architecture of these traits toward ultimate yield improvement. High-throughput phenotyping methods have been developed that analyze images of kernels. However, segmenting kernels from the image background and noise artifacts, or from other kernels positioned in close proximity, remains challenging. In this study, we developed a software package, named GridFree, to overcome these challenges. GridFree uses an unsupervised machine learning approach, K-Means, to segment kernels from the background, applying principal component analysis to both the raw image channels and their color indices. GridFree incorporates users' experience as a dynamic criterion to set thresholds for a divide-and-combine strategy that effectively segments adjacent kernels. When multiple adjacent kernels are incorrectly segmented as a single object, they form an outlier on the distribution plots of kernel area, length, and width; GridFree uses dynamic threshold settings to split and merge such objects. In addition to counting, GridFree measures kernel length, width, and area, with the option of scaling against a reference object. Evaluations against existing software programs demonstrated that GridFree had the smallest counting error across multiple crops, including alfalfa, canola, lentil, wheat, chickpea, and soybean. GridFree is implemented in Python with a friendly graphical user interface that lets users easily visualize outcomes and make decisions, ultimately eliminating time-consuming and repetitive manual labor. GridFree is freely available at the GridFree website (https://zzlab.net/GridFree).
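A minimal sketch of the kind of K-Means foreground/background segmentation the GridFree abstract describes: pixels are clustered on PCA-reduced color features, and connected components of the foreground cluster are counted as kernel candidates. This illustrates the general technique, not GridFree's actual implementation; the image path and color indices are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from scipy import ndimage
from PIL import Image

img = np.asarray(Image.open("kernels.png").convert("RGB"), dtype=float)  # placeholder path
h, w, _ = img.shape
r, g, b = img[..., 0], img[..., 1], img[..., 2]

# Raw channels plus simple color indices as per-pixel features.
feats = np.stack([r, g, b, g - r, g - b, 2 * g - r - b], axis=-1).reshape(-1, 6)
feats = PCA(n_components=3).fit_transform(feats)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
mask = labels.reshape(h, w)
if mask.sum() > mask.size / 2:   # assume the smaller cluster is the kernels
    mask = 1 - mask

objects, n = ndimage.label(mask)  # connected components ~ kernel candidates
areas = ndimage.sum(mask, objects, index=np.arange(1, n + 1))
print(f"{n} candidate kernels; median area {np.median(areas):.0f} px")
```

Outliers in the printed area distribution would correspond to touching kernels that a divide-and-combine step, as described in the abstract, would then split.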


Author(s): Pei Yang, Qi Tan, Jieping Ye, Hanghang Tong, Jingrui He

In this paper, we propose a deep multi-task learning model based on Adversarial-and-COoperative nets (TACO). The goal is to use an adversarial-and-cooperative strategy to decouple task-common and task-specific knowledge, facilitating fine-grained knowledge sharing among tasks. TACO accommodates multiple game players, i.e., feature extractors, a domain discriminator, and tri-classifiers. They play MinMax games adversarially and cooperatively to distill task-common and task-specific features while respecting their discriminative structures. Moreover, TACO adopts a divide-and-combine strategy to leverage the decoupled multi-view information and further improve the generalization performance of the model. Experimental results show that the proposed method significantly outperforms state-of-the-art algorithms on benchmark datasets in both multi-task learning and semi-supervised domain adaptation scenarios.
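The adversarial side of such a MinMax game is often implemented with a gradient-reversal layer: a shared extractor learns features the discriminator cannot attribute to a task, while per-task extractors keep task-specific information. The PyTorch sketch below illustrates that general pattern under those assumptions; it is not the authors' TACO architecture, and all layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

shared = nn.Sequential(nn.Linear(16, 32), nn.ReLU())       # task-common extractor
specific = nn.ModuleList(nn.Sequential(nn.Linear(16, 32), nn.ReLU()) for _ in range(2))
task_heads = nn.ModuleList(nn.Linear(64, 3) for _ in range(2))  # per-task classifiers
disc = nn.Linear(32, 2)             # guesses which task a common feature came from

x = torch.randn(8, 16)              # a batch from one task (toy data)
y = torch.randint(0, 3, (8,))
task_id = 0

common = shared(x)
# Adversarial branch: the discriminator tries to identify the task from the
# common features; reversed gradients push `shared` toward task-invariance.
adv_logits = disc(GradReverse.apply(common, 1.0))
adv_loss = nn.functional.cross_entropy(
    adv_logits, torch.full((8,), task_id, dtype=torch.long))

# Cooperative branch: combine common and task-specific views for prediction.
feats = torch.cat([common, specific[task_id](x)], dim=1)
task_loss = nn.functional.cross_entropy(task_heads[task_id](feats), y)

(task_loss + adv_loss).backward()   # one joint step of the MinMax game
```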

