Sulphur systems biology—making sense of omics data

2019 ◽  
Vol 70 (16) ◽  
pp. 4155-4170 ◽  
Author(s):  
Mutsumi Watanabe ◽  
Rainer Hoefgen

Abstract Systems biology approaches have been applied over the last two decades to study plant sulphur metabolism. These ‘sulphur-omics’ approaches have been developed in parallel with the advancing field of systems biology, which is characterized by continual improvement of high-throughput methods to obtain system-wide data. The aim is to obtain a holistic view of sulphur metabolism and to generate models that allow predictions of metabolic and physiological responses. Besides known sulphur-responsive genes derived from previous studies, numerous genes have been identified in transcriptomics studies. This has not only increased our knowledge of sulphur metabolism but has also revealed links between metabolic processes, thus indicating a previously unsuspected, complex interconnectivity. The identification of response and control networks has been supported through metabolomics and proteomics studies. Due to the complex interlacing nature of biological processes, experimental validation using targeted or systems approaches is ongoing. There is still room for improvement in integrating the findings from studies of metabolomes, proteomes, and metabolic fluxes into a single unifying concept and in generating consistent models. We therefore suggest a joint effort of the sulphur research community to standardize data acquisition. Furthermore, focusing on a few different model plant systems would help overcome the problem of fragmented data, and would allow us to provide a standard data set against which future experiments can be designed and compared.

Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1827
Author(s):  
Piotr Cofta ◽  
Kostas Karatzas ◽  
Cezary Orłowski

The growing popularity of inexpensive IoT (Internet of Things) sensor networks makes their uncertainty an important aspect of their adoption. The uncertainty determines their fitness for purpose, their perceived quality and the usefulness of the information they provide. Nevertheless, neither the theory nor the industrial practice of uncertainty offers a coherent answer on how to address the uncertainty of networks of this type and their components. The primary objective of this paper is to facilitate the discussion of what progress should be made regarding the theory and the practice of uncertainty of IoT sensor networks to satisfy current needs. This paper provides a structured overview of uncertainty, specifically focusing on IoT sensor networks. It positions IoT sensor networks as contrasted with professional measurement and control networks and presents their conceptual sociotechnical reference model. The reference model informs the taxonomy of uncertainty proposed in this paper, which demonstrates semantic differences between various views on uncertainty. This model also allows for identifying key challenges that should be addressed to improve the theory and practice of uncertainty in IoT sensor networks.


Algorithms ◽  
2021 ◽  
Vol 14 (2) ◽  
pp. 37
Author(s):  
Shixun Wang ◽  
Qiang Chen

Boosting for ensemble learning has made great progress, but most methods boost only a single modality. For this reason, a simple multiclass boosting framework that uses local similarity as a weak learner is extended here to multimodal multiclass boosting. First, with local similarity as the weak learner, the loss function is used to compute a baseline loss, and the logarithmic data points are binarized. Then the optimal local similarity and its corresponding loss are found; whichever of this loss and the baseline loss is smaller is kept as the best so far. Second, the local similarity between pairs of points is computed, and the loss is then calculated from this pairwise similarity. Finally, text and images are retrieved against each other, and the retrieval accuracy is obtained for each direction. Experimental results, obtained by evaluating the multimodal multiclass boosting framework with local similarity as the weak learner on standard data sets and comparing it with other state-of-the-art methods, demonstrate the effectiveness of this approach.
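
As an illustrative sketch only (not the authors' exact formulation), the snippet below shows one round of a SAMME-style multiclass boosting update in which the weak learner scores each sample by its local (RBF) similarity to per-class prototypes; the prototype-based similarity, the function names, and all parameters are assumptions made for this example.

```python
import numpy as np

def local_similarity(x, prototypes, gamma=1.0):
    """RBF similarity of one sample to per-class prototypes (stand-in weak learner)."""
    d2 = np.sum((prototypes - x) ** 2, axis=1)
    return np.exp(-gamma * d2)  # one similarity score per class

def boosting_round(X, y, weights, prototypes, n_classes, gamma=1.0):
    """One SAMME-style multiclass boosting round with local similarity as the weak learner."""
    scores = np.array([local_similarity(x, prototypes, gamma) for x in X])
    preds = scores.argmax(axis=1)                              # predicted class per sample
    miss = (preds != y).astype(float)
    err = np.clip(np.dot(weights, miss) / weights.sum(), 1e-10, 1 - 1e-10)
    alpha = np.log((1 - err) / err) + np.log(n_classes - 1)    # weak-learner weight
    weights = weights * np.exp(alpha * miss)                   # up-weight misclassified points
    return alpha, weights / weights.sum()
```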


2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Adeline Su Lyn Ng ◽  
Juan Wang ◽  
Kwun Kei Ng ◽  
Joanna Su Xian Chong ◽  
Xing Qian ◽  
...  

Abstract Background Alzheimer’s disease (AD) and behavioral variant frontotemporal dementia (bvFTD) cause distinct atrophy and functional disruptions within two major intrinsic brain networks, namely the default network and the salience network, respectively. It remains unclear if inter-network relationships and whole-brain network topology are also altered and underpin cognitive and social–emotional functional deficits. Methods In total, 111 participants (50 AD, 14 bvFTD, and 47 age- and gender-matched healthy controls) underwent resting-state functional magnetic resonance imaging (fMRI) and neuropsychological assessments. Functional connectivity was derived among 144 brain regions of interest. Graph theoretical analysis was applied to characterize network integration, segregation, and module distinctiveness (degree centrality, nodal efficiency, within-module degree, and participation coefficient) in AD, bvFTD, and healthy participants. Group differences in graph theoretical measures and empirically derived network community structures, as well as the associations between these indices and cognitive performance and neuropsychiatric symptoms, were subject to general linear models, with age, gender, education, motion, and scanner type controlled. Results Our results suggested that AD had lower integration in the default and control networks, while bvFTD exhibited disrupted integration in the salience network. Interestingly, AD and bvFTD had the highest and lowest degree of integration in the thalamus, respectively. Such divergence in topological aberration was recapitulated in network segregation and module distinctiveness loss, with AD showing poorer modular structure between the default and control networks, and bvFTD having more fragmented modules in the salience network and subcortical regions. Importantly, aberrations in network topology were related to worse attention deficits and greater severity in neuropsychiatric symptoms across syndromes. Conclusions Our findings underscore the reciprocal relationships between the default, control, and salience networks that may account for the cognitive decline and neuropsychiatric symptoms in dementia.
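
For readers unfamiliar with the module-level graph measures named above, the sketch below (a simplified illustration, not the authors' pipeline) computes degree, within-module degree z-score, and participation coefficient from a thresholded, binary functional connectivity matrix; nodal efficiency is omitted, and the binary-adjacency assumption and function names are mine.

```python
import numpy as np

def graph_module_measures(adj, modules):
    """Degree, within-module degree z-score, and participation coefficient for a binary,
    undirected adjacency matrix `adj` (n x n) with an integer module label per node."""
    adj = np.asarray(adj, dtype=float)
    degree = adj.sum(axis=1)

    # Within-module degree z-score: connectivity of a node inside its own module.
    z = np.zeros_like(degree)
    for m in np.unique(modules):
        idx = np.where(modules == m)[0]
        k_within = adj[np.ix_(idx, idx)].sum(axis=1)
        mu, sd = k_within.mean(), k_within.std()
        z[idx] = (k_within - mu) / sd if sd > 0 else 0.0

    # Participation coefficient: 1 - sum over modules of (k_i_to_module / k_i)^2.
    part = np.zeros_like(degree)
    for m in np.unique(modules):
        idx = np.where(modules == m)[0]
        part += (adj[:, idx].sum(axis=1) / np.maximum(degree, 1e-12)) ** 2
    part = 1.0 - part
    part[degree == 0] = 0.0
    return degree, z, part
```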


2021 ◽  
Vol 2 (2) ◽  
Author(s):  
Kate Highnam ◽  
Domenic Puzio ◽  
Song Luo ◽  
Nicholas R. Jennings

Abstract Botnets and malware continue to avoid detection by static rule engines when using domain generation algorithms (DGAs) for callouts to unique, dynamically generated web addresses. Common DGA detection techniques fail to reliably detect DGA variants that combine random dictionary words to create domain names that closely mirror legitimate domains. To combat this, we created a novel hybrid neural network, Bilbo the “bagging” model, that analyses domains and scores the likelihood they are generated by such algorithms and therefore are potentially malicious. Bilbo is the first parallel usage of a convolutional neural network (CNN) and a long short-term memory (LSTM) network for DGA detection. Our unique architecture is found to be the most consistent in performance in terms of AUC, F1 score, and accuracy when generalising across different dictionary DGA classification tasks compared to current state-of-the-art deep learning architectures. We validate using reverse-engineered dictionary DGA domains and detail our real-time implementation strategy for scoring real-world network logs within a large enterprise. In 4 h of actual network traffic, the model discovered at least five potential command-and-control networks that commercial vendor tools did not flag.
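
A rough sketch of what a parallel CNN/LSTM hybrid for character-level domain classification can look like in Keras; the layer sizes, vocabulary size, and maximum length are placeholders, not Bilbo's published configuration.

```python
from tensorflow.keras import layers, models

max_len, vocab_size = 63, 40   # placeholder values; domains are integer-encoded characters

inp = layers.Input(shape=(max_len,))
emb = layers.Embedding(vocab_size, 32)(inp)

# Parallel branches: CNN captures local n-gram patterns, LSTM captures sequential structure.
cnn = layers.Conv1D(64, 4, activation="relu")(emb)
cnn = layers.GlobalMaxPooling1D()(cnn)
lstm = layers.LSTM(64)(emb)

merged = layers.concatenate([cnn, lstm])
hidden = layers.Dense(64, activation="relu")(merged)
out = layers.Dense(1, activation="sigmoid")(hidden)   # probability the domain is DGA-generated

model = models.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```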


2021 ◽  
Vol 25 ◽  
pp. 233121652110093
Author(s):  
Patrycja Książek ◽  
Adriana A. Zekveld ◽  
Dorothea Wendt ◽  
Lorenz Fiedler ◽  
Thomas Lunner ◽  
...  

In hearing research, pupillometry is an established method of studying listening effort. The focus of this study was to evaluate several pupil measures extracted from the Task-Evoked Pupil Responses (TEPRs) in a speech-in-noise test. A range of analysis approaches was applied to extract these pupil measures, namely (a) pupil peak dilation (PPD); (b) mean pupil dilation (MPD); (c) index of pupillary activity; (d) growth curve analysis (GCA); and (e) principal component analysis (PCA). The effects of signal-to-noise ratio (SNR; Data Set A: –20 dB, –10 dB, +5 dB SNR) and luminance (Data Set B: 0.1 cd/m2, 360 cd/m2) on the TEPRs were investigated. Data Sets A and B were recorded during a speech-in-noise test and included TEPRs from 33 and 27 normal-hearing native Dutch speakers, respectively. The main results were as follows: (a) A significant effect of SNR was revealed for all pupil measures extracted in the time domain (PPD, MPD, GCA, PCA); (b) The two time-series analysis approaches provided modeled temporal profiles of TEPRs (GCA) and time windows spanning the subtasks performed in a speech-in-noise test (PCA); and (c) All pupil measures revealed a significant effect of luminance. In conclusion, multiple pupil measures showed similar effects of SNR, suggesting that effort may be reflected in multiple aspects of TEPR. Moreover, a direct analysis of the pupil time course seems to provide a more holistic view of TEPRs, yet further research is needed to understand and interpret its measures. Further research is also required to find pupil measures less sensitive to changes in luminance.
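
A minimal sketch of how the two simplest measures, PPD and MPD, can be computed from a single pupil trace; the one-second baseline window, the baseline-subtraction scheme, and the NaN handling are assumptions made for illustration, not the study's exact preprocessing.

```python
import numpy as np

def pupil_measures(trace, fs, baseline_s=1.0):
    """Peak (PPD) and mean (MPD) pupil dilation relative to a pre-stimulus baseline.
    `trace` is one pupil-diameter time series (baseline then trial) sampled at `fs` Hz."""
    n_base = int(baseline_s * fs)
    baseline = np.nanmean(trace[:n_base])
    dilation = trace[n_base:] - baseline      # baseline-corrected task-evoked response
    return np.nanmax(dilation), np.nanmean(dilation)
```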


2015 ◽  
Vol 15 (1) ◽  
pp. 253-272 ◽  
Author(s):  
M. R. Canagaratna ◽  
J. L. Jimenez ◽  
J. H. Kroll ◽  
Q. Chen ◽  
S. H. Kessler ◽  
...  

Abstract. Elemental compositions of organic aerosol (OA) particles provide useful constraints on OA sources, chemical evolution, and effects. The Aerodyne high-resolution time-of-flight aerosol mass spectrometer (HR-ToF-AMS) is widely used to measure OA elemental composition. This study evaluates AMS measurements of atomic oxygen-to-carbon (O : C), hydrogen-to-carbon (H : C), and organic mass-to-organic carbon (OM : OC) ratios, and of carbon oxidation state (OSC) for a vastly expanded laboratory data set of multifunctional oxidized OA standards. For the expanded standard data set, the method introduced by Aiken et al. (2008), which uses experimentally measured ion intensities at all ions to determine elemental ratios (referred to here as "Aiken-Explicit"), reproduces known O : C and H : C ratio values within 20% (average absolute value of relative errors) and 12%, respectively. The more commonly used method, which uses empirically estimated H2O+ and CO+ ion intensities to avoid gas phase air interferences at these ions (referred to here as "Aiken-Ambient"), reproduces O : C and H : C of multifunctional oxidized species within 28% and 14% of known values. The values from the latter method are systematically biased low, however, with larger biases observed for alcohols and simple diacids. A detailed examination of the H2O+, CO+, and CO2+ fragments in the high-resolution mass spectra of the standard compounds indicates that the Aiken-Ambient method underestimates the CO+ and especially H2O+ produced from many oxidized species. Combined AMS–vacuum ultraviolet (VUV) ionization measurements indicate that these ions are produced by dehydration and decarboxylation on the AMS vaporizer (usually operated at 600 °C). Thermal decomposition is observed to be efficient at vaporizer temperatures down to 200 °C. These results are used together to develop an "Improved-Ambient" elemental analysis method for AMS spectra measured in air. The Improved-Ambient method uses specific ion fragments as markers to correct for molecular functionality-dependent systematic biases and reproduces known O : C (H : C) ratios of individual oxidized standards within 28% (13%) of the known molecular values. The error in Improved-Ambient O : C (H : C) values is smaller for theoretical standard mixtures of the oxidized organic standards, which are more representative of the complex mix of species present in ambient OA. For ambient OA, the Improved-Ambient method produces O : C (H : C) values that are 27% (11%) larger than previously published Aiken-Ambient values; a corresponding increase of 9% is observed for OM : OC values. These results imply that ambient OA has a higher relative oxygen content than previously estimated. The OSC values calculated for ambient OA by the two methods agree well, however (average relative difference of 0.06 OSC units). This indicates that OSC is a more robust metric of oxidation than O : C, likely since OSC is not affected by hydration or dehydration, either in the atmosphere or during analysis.
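
For orientation, the commonly used relations connecting these quantities for CHO-only organics (OM : OC from the atomic ratios, and mean carbon oxidation state OSC ≈ 2 × O : C − H : C) can be evaluated directly; the example values below are hypothetical, chosen only to show the arithmetic.

```python
def om_oc_and_osc(o_c, h_c):
    """Approximate OM:OC and mean carbon oxidation state from elemental ratios,
    assuming CHO-only organics (nitrogen and sulphur contributions neglected)."""
    om_oc = 1.0 + (16.0 / 12.0) * o_c + (1.0 / 12.0) * h_c   # mass per carbon / 12
    os_c = 2.0 * o_c - h_c                                    # standard approximation
    return om_oc, os_c

# Example: a moderately oxidized OA with O:C = 0.5 and H:C = 1.5
print(om_oc_and_osc(0.5, 1.5))   # -> approximately (1.79, -0.5)
```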


2011 ◽  
Vol 22 (4) ◽  
pp. 566-575 ◽  
Author(s):  
Luca Gerosa ◽  
Uwe Sauer

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Avinash Jawade

Purpose This study aims to analyze the influence of firm characteristics on dividend payout in a concentrated ownership setting. Design/methodology/approach This study is probably the first to use the lasso technique for model selection and error prediction in the study of dividend payout in India. The lasso method comprises subsampling the available data set and performing repeated regressions on those samples to generate the model with the best fit. This study incorporates four different ways of performing lasso treatment to get the best fit among them. Findings This study analyzes the influence of firm characteristics on dividend payout in the Indian context and asserts that firms with growth potential and earnings volatility do not hesitate to cut dividends. This study does not find evidence for signaling, agency cost and life cycle theories in a concentrated ownership setting. Earnings is the single most important factor to have a positive influence on dividends, while excessively leveraged firms are restrictive of dividend payout. Taxation has a prominent role in altering the way firms pay dividends. Research limitations/implications The recent changes in buyback taxation offer another opportunity to test the reactive behavior of firms. Also, given the disregard for traditional motivations, further research needs to be done to determine if dividend adjustments (on the lower side) help enhance firm value or not. Practical implications This study may help investors view dividends in a proper perspective. Firms give importance to investments over dividends and thus investors need not dwell on dividend changes if firms fulfill their growth potential. Social implications It lends perspective to investors about dividend changes and their importance. Originality/value The methodology used for analysis is absolutely original in the literature pertaining to dividend policy in the Indian context. The literature is abundant with theories advocating or opposing the eminence of dividend payout; however, this study takes a holistic view of all influential dividend determinants in the literature to understand dividend payout.
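
A compact sketch of the general lasso-with-cross-validation workflow on standardized predictors, using scikit-learn's LassoCV; the synthetic data and variable names are placeholders and do not reproduce the study's data set or its four lasso treatments.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical data: columns stand in for firm characteristics
# (e.g., earnings, leverage, growth, earnings volatility); y is dividend payout.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = 0.8 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(scale=0.5, size=200)

model = make_pipeline(StandardScaler(), LassoCV(cv=5))
model.fit(X, y)
print(model.named_steps["lassocv"].coef_)   # coefficients shrunk to zero drop out of the model
```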


Author(s):  
Guixiu Qiao ◽  
Brian A. Weiss

Over time, robots degrade because of age and wear, leading to decreased reliability and increasing potential for faults and failures; this negatively impacts robot availability. Economic factors motivate facilities and factories to improve maintenance operations to monitor robot degradation and detect faults and failures, especially to eliminate unexpected shutdowns. Since robot systems are complex, with sub-systems and components, it is challenging to determine these constituent elements' specific influence on the overall system performance. The development of monitoring, diagnostic, and prognostic technologies (collectively known as Prognostics and Health Management (PHM)) can aid manufacturers in maintaining the performance of robot systems by providing intelligence to enhance maintenance and control strategies. This paper presents the strategy of integrating top level and component level PHM to detect robot performance degradation (including robot tool center accuracy degradation), supported by the development of a four-layer sensing and analysis structure. The top level PHM can quickly detect robot tool center accuracy degradation through advanced sensing and test methods developed at the National Institute of Standards and Technology (NIST). The component level PHM supports deep data analysis for root cause diagnostics and prognostics. A reference data set is collected and analyzed using the integration of top level PHM and component level PHM to understand the influence of temperature, speed, and payload on the robot's accuracy degradation.
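
As a toy illustration of a top-level health indicator (not NIST's advanced sensing and test method), the snippet below smooths a stream of tool-center-point position errors with a moving average and flags samples that exceed an assumed tolerance; the window length and threshold are placeholders.

```python
import numpy as np

def flag_degradation(tcp_error_mm, window=50, threshold_mm=0.5):
    """Smooth tool-center-point (TCP) position error and flag samples whose
    moving average exceeds a tolerance threshold, as a simple degradation indicator."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(tcp_error_mm, kernel, mode="valid")
    return smoothed, smoothed > threshold_mm
```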

