Antioxidant Profiling of Ginger via Reaction Flow Chromatography

2021 ◽  
Vol 16 (9) ◽  
pp. 1934578X2110352
Author(s):  
Xian Zhou ◽  
Declan Power ◽  
Andrew Jones ◽  
Agustín Acquaviva ◽  
Gary R. Dennis ◽  
...  

Reaction flow (RF) chromatography is a powerful and efficient approach built on conventional high-performance liquid chromatography (HPLC) with ultraviolet (UV)–visible detection. The technique uses a novel column end-fitting and an additional HPLC pump that delivers a reagent for selective detection, here applied to the antioxidant profiling of natural products. This study employed RF for the first time to identify antioxidants in a commercial ginger sample, demonstrating the ease and power of the previously validated assay for extracting information about a natural product's antioxidant properties. Because data analysis and peak matching are straightforward, the following relationships between the chemical and antioxidant profiles were revealed: the three strongest antioxidant activity peaks in the ginger sample (593 nm) did not correspond to the three most abundant chemical profile peaks (UV absorbance at 254 and 280 nm); the ratios of seven antioxidant peaks could potentially be used for food authenticity purposes; and future research should target these peaks for the early discovery of novel antioxidants from ginger. The previously validated assay resolved numerous peaks in the ginger extract and provided information on both their antioxidant attributes and their chemical abundance. This approach is more informative than total antioxidant assays, which lack compound-specific information. It is also superior to mass spectrometric (MS) assays, which cannot evaluate each compound's antioxidant strength, and it avoids the expense of acquiring and maintaining MS detection hardware as well as the high level of expertise needed for MS data analysis.
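To make the peak-matching step concrete, the following is a minimal sketch of pairing peaks from a chemical trace (UV absorbance) with peaks from an antioxidant trace (593 nm) by retention time; the peak lists, the tolerance, and the function name are illustrative assumptions, not values or code from the study.

```python
# Minimal sketch of peak matching between a chemical trace (UV 254/280 nm)
# and an antioxidant trace (593 nm). Peak lists and the 0.1 min tolerance
# are illustrative assumptions, not values from the study.

def match_peaks(chemical_peaks, antioxidant_peaks, tol=0.1):
    """Pair peaks whose retention times (min) agree within `tol`."""
    matches = []
    for rt_c, area_c in chemical_peaks:
        for rt_a, area_a in antioxidant_peaks:
            if abs(rt_c - rt_a) <= tol:
                matches.append({"rt": rt_c, "uv_area": area_c, "aox_area": area_a})
    return matches

# Hypothetical peak lists: (retention time in min, peak area)
uv_peaks = [(3.2, 1200.0), (5.8, 950.0), (7.4, 400.0)]
aox_peaks = [(3.25, 80.0), (7.38, 310.0)]

for m in match_peaks(uv_peaks, aox_peaks):
    # Ratio of antioxidant response to chemical abundance for each matched peak
    print(m["rt"], m["aox_area"] / m["uv_area"])
```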

2007 ◽  
Vol 15 (1) ◽  
pp. 45-65
Author(s):  
Hans P. Zima

When the first specification of the FORTRAN language was released in 1956, the goal was to provide an "automatic programming system" that would enhance the economy of programming by replacing assembly language with a notation closer to the domain of scientific programming. A key issue in this context, explicitly recognized by the authors of the language, was the requirement to produce efficient object programs that could compete with their hand-coded counterparts. More than 50 years later, a similar situation exists with respect to finding the right programming paradigm for high performance computing systems. FORTRAN, as the traditional language for scientific programming, has played a major role in the quest for high-productivity programming languages that satisfy very strict performance constraints. This paper focuses on high-level support for locality awareness, one of the most important requirements in this context. The discussion centers on the High Performance Fortran (HPF) family of languages and their influence on current language developments for petascale computing. HPF is a data-parallel language that was designed to provide the user with a high-level interface for programming scientific applications, while delegating to the compiler the task of generating an explicitly parallel message-passing program. We outline the developments that led to HPF, explain its major features, identify a set of weaknesses, and discuss subsequent languages that address these problems. The final part of the paper deals with Chapel, a modern object-oriented language developed in the High Productivity Computing Systems (HPCS) program sponsored by DARPA. A salient property of Chapel is its general framework for the support of user-defined distributions, which is related in many ways to ideas first described in Vienna Fortran. This framework is general enough to allow a concise specification of sparse data distributions. The paper concludes with an outlook on future research in this area.
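As a concrete illustration of the data distributions discussed above, the sketch below shows how BLOCK and CYCLIC distributions (of the kind HPF specifies and Chapel generalizes through user-defined distributions) map global array indices to processors; it is plain Python for illustration, not HPF or Chapel code, and the helper names are assumptions.

```python
# Illustrative sketch (not HPF or Chapel code): how BLOCK and CYCLIC
# distributions map global array indices to processors.
import math

def block_owner(i, n, p):
    """Owner of global index i for a BLOCK distribution of n elements over p procs."""
    block = math.ceil(n / p)
    return i // block

def cyclic_owner(i, p):
    """Owner of global index i for a CYCLIC distribution over p procs."""
    return i % p

n, p = 10, 3
print([block_owner(i, n, p) for i in range(n)])   # [0, 0, 0, 0, 1, 1, 1, 1, 2, 2]
print([cyclic_owner(i, p) for i in range(n)])     # [0, 1, 2, 0, 1, 2, 0, 1, 2, 0]
```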


2009 ◽  
Vol 18 (08) ◽  
pp. 1467-1480
Author(s):  
JUHONG YANG ◽  
YUKI SAITO ◽  
QIWEI SHI ◽  
JIANTING CAO ◽  
TOSHIHISA TANAKA ◽  
...  

Magnetoencephalography (MEG) is a powerful, non-invasive technique for measuring human brain activity with high temporal resolution. The motivation for studying MEG data analysis is to extract the essential features from real-world measurements and relate them to human brain functions, which usually depends on reducing the high level of noise in the measurements. In this paper, a novel multistage MEG data analysis method based on empirical mode decomposition (EMD) and independent component analysis (ICA) is proposed for feature extraction. The EMD and ICA algorithms are investigated on single-trial MEG data recorded from a phantom experiment. The results illustrate the effectiveness and high performance of the combined EMD–ICA approach for high-level noise reduction and of the equivalent current dipole fitting method for source localization.
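The following is a minimal sketch of a two-stage EMD-plus-ICA pipeline in the spirit of the method described above, assuming the third-party PyEMD and scikit-learn packages; the synthetic channels, the choice of which IMFs to keep, and the number of components are illustrative assumptions rather than the paper's procedure.

```python
# Sketch of a two-stage EMD + ICA denoising pipeline (illustrative, not the
# paper's exact procedure). Assumes the PyEMD and scikit-learn packages.
import numpy as np
from PyEMD import EMD                      # empirical mode decomposition
from sklearn.decomposition import FastICA  # independent component analysis

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)

# Synthetic "MEG" channels: a common 10 Hz source plus heavy broadband noise.
source = np.sin(2 * np.pi * 10 * t)
channels = np.stack([source + 0.8 * rng.standard_normal(t.size) for _ in range(4)])

# Stage 1: EMD on each channel; keep only a few mid-frequency IMFs as a crude
# noise-reduction step (which IMFs to keep is an illustrative choice).
denoised = []
for ch in channels:
    imfs = EMD().emd(ch)
    denoised.append(imfs[1:4].sum(axis=0) if imfs.shape[0] >= 4 else ch)
denoised = np.stack(denoised)

# Stage 2: ICA across the EMD-cleaned channels to separate the common source.
ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(denoised.T)   # shape: (n_samples, n_components)
print(components.shape)
```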


2004 ◽  
Vol 74 (2) ◽  
pp. 153-160 ◽  
Author(s):  
Molldrem ◽  
Tanumihardjo

Lutein is a carotenoid that may be involved in the prevention of macular degeneration and is available as supplements. Cranberries are a potential 'functional food' due to anti-adhesion and antioxidant properties. This study was designed to determine the bioavailability of lutein supplements in Mongolian gerbils, as prior studies have focused on β-carotene, and to investigate any interactions between a lutein supplement and a diet containing cranberries. Gerbils (n = 28) were divided into treatment groups: lutein + cranberry; lutein + control; cottonseed oil + cranberry; and cottonseed oil + control. The lutein supplement (50 μg lutein in oil) was delivered orally for 14 days, and then blood, livers, and eyes were collected. Samples were analyzed by high-performance liquid chromatography (HPLC) and total antioxidant status was determined. Serum and liver were analyzed for lutein, retinol, and α-tocopherol. Serum lutein concentrations were extremely low in all four groups. Serum total antioxidants did not differ (p > 0.2) among diet groups. Serum retinol concentrations were significantly lower in the cranberry groups (p = 0.0024). In conclusion, gerbils are able to thrive on a high-cranberry diet. However, this study showed that lutein, as a daily supplement in oil, is not bioavailable in Mongolian gerbils.


2017 ◽  
Vol 2017 ◽  
pp. 1-10 ◽  
Author(s):  
Zübeyir Huyut ◽  
Şükrü Beydemir ◽  
İlhami Gülçin

Phenolic compounds and flavonoids are known for their antioxidant properties, and one of the most important sources for humans is the diet. Due to the harmful effects of synthetic antioxidants such as BHA and BHT, natural novel antioxidants have become the focus of attention for protecting foods and beverages and reducing oxidative stress in vivo. In the current study, we investigated the total antioxidant, metal chelating, Fe3+ and Cu2+ reduction, and free radical scavenging activities of some phenolic and flavonoid compounds including malvin, oenin, ID-8, silychristin, callistephin, pelargonin, 3,4-dihydroxy-5-methoxybenzoic acid, 2,4,6-trihydroxybenzaldehyde, and arachidonoyl dopamine. The antioxidant properties of these compounds at different concentrations (10–30 μg/mL) were compared with those of reference antioxidants such as BHA, BHT, α-tocopherol, and Trolox. Each substance showed dose-dependent antioxidant activity. Furthermore, oenin, malvin, arachidonoyl dopamine, callistephin, silychristin, and 3,4-dihydroxy-5-methoxybenzoic acid exhibited more effective antioxidant activity than that observed for the reference antioxidants. These results suggest that these novel compounds may function to protect foods and medicines and to reduce oxidative stress in vivo.
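As an example of how dose-dependent free radical scavenging activity is commonly quantified from absorbance readings (e.g., in a DPPH-type assay), the sketch below applies the standard percent-scavenging formula; the absorbance values are hypothetical and are not data from this study.

```python
# Illustrative calculation of dose-dependent radical scavenging activity
# from absorbance readings (e.g., a DPPH-type assay). The absorbance values
# below are made up for demonstration; they are not data from the study.

def scavenging_percent(a_control: float, a_sample: float) -> float:
    """Percent radical scavenging: ((A_control - A_sample) / A_control) * 100."""
    return (a_control - a_sample) / a_control * 100.0

a_control = 0.820                  # absorbance of the radical solution alone
doses_ug_per_ml = [10, 20, 30]     # concentrations within the study's 10-30 ug/mL range
a_samples = [0.610, 0.445, 0.300]  # hypothetical absorbances with the test compound

for dose, a in zip(doses_ug_per_ml, a_samples):
    print(f"{dose} ug/mL -> {scavenging_percent(a_control, a):.1f}% scavenging")
```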


2014 ◽  
Vol 70 (a1) ◽  
pp. C340-C340
Author(s):  
Olof Svensson ◽  
Sandor Brockhauser ◽  
Matthew Bowler ◽  
Max Nanao ◽  
Matias Guijarro ◽  
...  

The high performance of modern synchrotron facilities means there is an increasing reliance on automated data analysis and collection methods. The EMBL and ESRF are actively involved in designing and implementing such automated methods. However, as these methods evolve, there is also a need to continually integrate newer and more sophisticated data analysis and collection protocols with experimental control. This integration often poses a challenge, requiring a high-level software environment to automatically coordinate beamline control with data acquisition and analysis. This is why we have extended the Eclipse RCP version of the workflow tool Passerelle into a user-friendly GUI for experiment design by scientists and programmers [1], which is now part of the Data Analysis WorkbeNch (DAWN) collaboration (http://www.dawnsci.org). Several complex workflows built with this technology are now fully integrated in the new version of MxCuBE [2] and deployed on the ESRF macromolecular crystallography beamlines. Here, I will present their current implementation and the data quality improvements that can be achieved. In particular, we have developed automated crystal re-orientation workflows that can improve the success of ab initio phasing experiments and help mitigate radiation damage effects [3]. Other protocols implemented include a 3D diffraction-based centring routine, a dehydration protocol, and the automated measurement of a crystal's radiation sensitivity. Lastly, I will present our future plans for other new advanced diffraction-based workflow routines, including automated crystal screening and data collection protocols.


Methodology ◽  
2017 ◽  
Vol 13 (1) ◽  
pp. 9-22 ◽  
Author(s):  
Pablo Livacic-Rojas ◽  
Guillermo Vallejo ◽  
Paula Fernández ◽  
Ellián Tuero-Herrero

Abstract. Low precision of inferences from data analyzed with univariate or multivariate Analysis of Variance (ANOVA) models in repeated-measures designs is associated with non-normally distributed data, nonspherical covariance structures with freely varying variances and covariances, lack of knowledge of the error structure underlying the data, and the wrong choice of covariance structure from different selectors. In this study, the levels of statistical power achieved by the Modified Brown-Forsythe (MBF) procedure and two mixed-model approaches (Akaike's Criterion and the Correctly Identified Model [CIM]) are compared. The data were analyzed by the Monte Carlo simulation method with the statistical package SAS 9.2, using a split-plot design and considering six manipulated variables. The results show that the procedures exhibit high statistical power for within-subjects and interaction effects, and moderate to low power for between-groups effects under the different conditions analyzed. For the latter, only the Modified Brown-Forsythe shows a high level of power, mainly for groups with 30 cases and unstructured (UN) and heterogeneous autoregressive (ARH) matrices. For this reason, we recommend using this procedure, since it exhibits higher power for all effects and does not require specifying the matrix type that underlies the structure of the data. Future research should compare power with corrected selectors using single-level and multilevel designs for fixed and random effects.
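For readers unfamiliar with the simulation approach, the following is a minimal sketch of estimating empirical statistical power by Monte Carlo simulation; it uses a simple two-group t-test rather than the split-plot mixed-model designs analyzed in the study, and the effect size, sample size, and alpha are illustrative assumptions.

```python
# Minimal sketch of estimating empirical power by Monte Carlo simulation.
# A simple two-group t-test stands in for the paper's split-plot mixed-model
# designs; effect size, n, and alpha are illustrative choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_per_group, effect_size, alpha, n_sims = 30, 0.5, 0.05, 5000

rejections = 0
for _ in range(n_sims):
    group_a = rng.normal(0.0, 1.0, n_per_group)
    group_b = rng.normal(effect_size, 1.0, n_per_group)
    _, p_value = stats.ttest_ind(group_a, group_b)
    rejections += p_value < alpha

# Empirical power = proportion of simulated datasets in which H0 is rejected.
print(f"Estimated power: {rejections / n_sims:.3f}")
```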


2020 ◽  
Author(s):  
James McDonagh ◽  
William Swope ◽  
Richard L. Anderson ◽  
Michael Johnston ◽  
David J. Bray

Digitization offers significant opportunities for the formulated product industry to transform the way it works and develop new methods of business. R&D is one area of operation where it is challenging to take advantage of these technologies due to its high level of domain specialisation and creativity, but the benefits could be significant. Recent developments in base-level technologies such as artificial intelligence (AI)/machine learning (ML), robotics and high performance computing (HPC), to name a few, present disruptive and transformative technologies which could offer new insights, discovery methods and enhanced chemical control when combined in a digital ecosystem of connectivity, distributive services and decentralisation. At the fundamental level, research in these technologies has shown that new physical and chemical insights can be gained, which in turn can augment experimental R&D approaches through physics-based chemical simulation, data-driven models and hybrid approaches. In all of these cases, high-quality data is required to build and validate models, in addition to the skills and expertise to exploit such methods. In this article we give an overview of some of the digital technology demonstrators we have developed for formulated product R&D. We discuss the challenges in building and deploying these demonstrators.


2020 ◽  
Author(s):  
Sina Faizollahzadeh Ardabili ◽  
Amir Mosavi ◽  
Pedram Ghamisi ◽  
Filip Ferdinand ◽  
Annamaria R. Varkonyi-Koczy ◽  
...  

Several outbreak prediction models for COVID-19 are being used by officials around the world to make informed decisions and enforce relevant control measures. Among the standard models for COVID-19 global pandemic prediction, simple epidemiological and statistical models have received more attention from authorities, and they are popular in the media. Due to a high level of uncertainty and a lack of essential data, standard models have shown low accuracy for long-term prediction. Although the literature includes several attempts to address this issue, the essential generalization and robustness abilities of existing models need to be improved. This paper presents a comparative analysis of machine learning and soft computing models to predict the COVID-19 outbreak as an alternative to SIR and SEIR models. Among a wide range of machine learning models investigated, two showed promising results: the multi-layered perceptron (MLP) and the adaptive network-based fuzzy inference system (ANFIS). Based on the results reported here, and due to the highly complex nature of the COVID-19 outbreak and the variation in its behavior from nation to nation, this study suggests machine learning as an effective tool to model the outbreak. This paper provides an initial benchmarking to demonstrate the potential of machine learning for future research. The paper further suggests that real novelty in outbreak prediction can be realized through integrating machine learning and SEIR models.
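As a toy illustration of the machine-learning alternative compared against SIR/SEIR models above, the sketch below fits a small MLP to a synthetic cumulative-case curve and extrapolates a few days ahead; the logistic-shaped data and network settings are illustrative assumptions, not the paper's data or configuration.

```python
# Toy sketch of the machine-learning approach: fit an MLP to a cumulative
# case curve and extrapolate one week ahead. The synthetic logistic-shaped
# data and network settings are illustrative, not the paper's setup.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

days = np.arange(60, dtype=float).reshape(-1, 1)
# Synthetic cumulative cases following a logistic curve plus noise.
cases = 10000 / (1 + np.exp(-0.2 * (days.ravel() - 30)))
cases += np.random.default_rng(0).normal(0, 50, cases.size)

# Scale the day index before the MLP; the architecture is an illustrative choice.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0),
)
model.fit(days, cases)

future = np.arange(60, 67, dtype=float).reshape(-1, 1)   # forecast the next week
print(model.predict(future).round())
```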

