From Single Nanowires to Smart Systems: Different Ways to Assess Food Quality

2021 ◽  
Vol 5 (1) ◽  
pp. 29
Author(s):  
Matteo Tonezzer ◽  
Franco Biasioli ◽  
Flavia Gasperi

Recently, low-dimensional (1D, 2D) nanostructured materials have attracted increasing interest as building blocks for innovative systems. Metal oxide nanowires are among the most widely used materials for solid-state gas sensors, as they are simple to make, inexpensive, and sensitive to a wide range of gases and volatiles. Unfortunately, their broad sensitivity comes at the price of very low selectivity. Fortunately, this flaw is not a problem for all applications. Where the boundary conditions are defined and “simple” (only the presence of a target gas is expected, without any interfering gases), a single traditional chemiresistor may be the best choice; where the variables are many, it is better to use an intelligent system. In this paper, we show a resistive sensor based on a single SnO2 nanowire which, working at three temperatures (200, 250, and 300 °C), can detect ammonia at tens of ppb (30 ppb at 300 °C). The limit of detection (LoD) was calculated as 3N/S, where N is the standard deviation of the sensor signal in air and S is the sensor sensitivity. We show that the performance of this nanosensor is excellent and suitable for various applications, including agri-food quality monitoring. We demonstrate that the SnO2 nanowire in a thermal gradient can act as a nano-electronic nose thanks to machine learning algorithms. The single nanowire-based sensor can estimate the total viable count with an error of 2.32% on mackerel fish samples stored at room temperature (25 °C) and in a fridge (4 °C). The integration of such a small (less than one square mm) and cheap device into the food supply chain would greatly reduce waste and the frequency of food poisoning.
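A minimal Python sketch of the two calculations described above: the 3N/S limit of detection, and the use of responses at the three working temperatures as features for a machine learning model. All numbers are hypothetical and the Random Forest regressor is a generic stand-in, not the authors' actual model.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def limit_of_detection(baseline_signal, sensitivity):
    # LoD = 3 * N / S, with N the standard deviation of the baseline signal in air
    # and S the sensor sensitivity (response change per unit concentration).
    noise = np.std(baseline_signal)
    return 3.0 * noise / sensitivity

# Hypothetical baseline readings in clean air and an assumed sensitivity value.
baseline = np.array([100.2, 100.5, 99.8, 100.1, 100.4])   # arbitrary signal units
sensitivity = 0.03                                         # signal units per ppb (assumed)
print(f"Estimated LoD: {limit_of_detection(baseline, sensitivity):.1f} ppb")

# The "nano-electronic nose" idea: responses read at 200, 250 and 300 °C form a
# three-dimensional feature vector fed to a machine learning model. Synthetic data
# stand in for real measurements; the paper's actual model is not specified here.
X = np.array([[1.2, 2.4, 3.9], [1.1, 2.2, 3.5], [2.8, 4.1, 6.0], [2.9, 4.3, 6.2]])
y = np.array([3.1, 3.0, 6.8, 7.0])   # hypothetical log10(total viable count)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(model.predict([[1.15, 2.3, 3.7]]))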

Foods ◽  
2019 ◽  
Vol 8 (2) ◽  
pp. 62 ◽  
Author(s):  
Claudia Ruiz-Capillas ◽  
Ana Herrero

Today, food safety and quality are some of the main concerns of consumers and health agencies around the world. Our current lifestyle and market globalization have led to an increase in the number of people affected by food poisoning. Foodborne illness and food poisoning have different origins (bacteria, viruses, parasites, molds, contaminants, etc.), and some cases of food poisoning can be traced back to chemical and natural toxins. One of the toxins targeted by the Food and Drug Administration (FDA) and the European Food Safety Authority (EFSA) is the biogenic amine histamine. Biogenic amines (BAs) in food constitute a potential public health concern due to their physiological and toxicological effects. The consumption of foods containing high concentrations of biogenic amines has been associated with health hazards. In recent years there has been an increase in the number of food poisoning cases associated with BAs in food, mainly in relation to histamine in fish. We need to gain a better understanding of the origin of foodborne disease and how to control it if we expect to keep people from getting ill. Biogenic amines are found in varying concentrations in a wide range of foods (fish, cheese, meat, wine, beer, vegetables, etc.), and BA formation is influenced by different factors associated with the raw material making up food products, microorganisms, processing, and conservation conditions. Moreover, BAs are thermostable. Biogenic amines also play an important role as indicators of food quality and/or acceptability. Hence, BAs need to be controlled in order to ensure high levels of food quality and safety. All of these aspects will be addressed in this review.


Author(s):  
Michael Webster ◽  
Jutta Buschbom ◽  
Alex Hardisty ◽  
Andrew Bentley

Specimens have long been viewed as critical to research in the natural sciences because each specimen captures the phenotype (and often the genotype) of a particular individual at a particular point in space and time. In recent years there has been considerable focus on digitizing the many physical specimens currently in the world’s natural history research collections. As a result, a growing number of specimens are each now represented by their own “digital specimen”, that is, a findable, accessible, interoperable and re-usable (FAIR) digital representation of the physical specimen, which contains data about it. At the same time, there has been growing recognition that each digital specimen can be extended, and made more valuable for research, by linking it to data/samples derived from the curated physical specimen itself (e.g., computed tomography (CT) scan imagery, DNA sequences or tissue samples), directly related specimens or data about the organism's life (e.g., specimens of parasites collected from it, photos or recordings of the organism in life, immediate surrounding ecological community), and the wide range of associated specimen-independent data sets and model-based contextualisations (e.g., taxonomic information, conservation status, bioclimatological region, remote sensing images, environmental-climatological data, traditional knowledge, genome annotations). The resulting connected network of extended digital specimens will enable new research on a number of fronts, and indeed this has already begun. The new types of research enabled fall into four distinct but overlapping categories. First, because the digital specimen is a surrogate—acting on the Internet for a physical specimen in a natural science collection—it is amenable to analytical approaches that are simply not possible with physical specimens. For example, digital specimens can serve as training, validation and test sets for predictive process-based or machine learning algorithms, which are opening new doors of discovery and forecasting. Such sophisticated and powerful analytical approaches depend on FAIR, and on extended digital specimen data being as open as possible. From these analytical approaches, biodiversity monitoring outputs can be derived that are critically needed by the biodiversity community because they are central to conservation efforts at all levels of analysis, from genetics to species to ecosystem diversity. Second, linking specimens to closely associated specimens (potentially across multiple disparate collections) allows for the coordinated co-analysis of those specimens. For example, linking specimens of parasites/pathogens to specimens of the hosts from which they were collected allows for a powerful new understanding of coevolution, including pathogen range expansion and shifts to new hosts. Similarly, linking specimens of pollinators, their food plants, and their predators can help untangle complex food webs and multi-trophic interactions. Third, linking derived data to their associated voucher specimens increases information richness, density, and robustness, thereby allowing for novel types of analyses, strengthening validation through linked independent data and thus improving confidence levels and risk assessment. For example, digital representations of specimens, which incorporate, e.g., images, CT scans, or vocalizations, may capture important information that otherwise is lost during preservation, such as coloration or behavior.
In addition, permanently linking genetic and genomic data to the specimen of the individual from which they were derived—something that is currently done inconsistently—allows for detailed studies of the connections between genotype and phenotype. Furthermore, persistent links between physical specimens, additional information, and associated transactions are the building blocks of documentation and preservation of chains of custody. These links will also facilitate data cleaning, updating, and maintenance of digital specimens and their derived and associated datasets, with ever-expanding research questions and applied uses materializing over time. The resulting high-quality data resources are needed for fact-based decision-making and forecasting based on monitoring, forensics and prediction workflows in conservation, sustainable management and policy-making. Finally, linking specimens to diverse but associated datasets allows for detailed, often transdisciplinary, studies of topics ranging from local adaptation, through the forces driving range expansion and contraction (critically important to our understanding of the consequences of climate change), to social vectors in disease transmission. A network of extended digital specimens will enable new and critically important research and applications in all of these categories, as well as science and uses that we cannot yet envision.
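To make the idea of an extended digital specimen concrete, the sketch below shows one possible shape of such a linked record. Every identifier, URL and field name is illustrative only and does not follow any formal specification.

# Minimal sketch of an "extended digital specimen" as a linked record.
# All identifiers, URLs and field names are hypothetical; the point is the
# FAIR-style linking of derived data, related specimens and contextual data.
extended_digital_specimen = {
    "id": "https://example.org/specimen/ABC123",            # persistent identifier (hypothetical)
    "physical_specimen": {"institution": "Example Museum", "catalog_number": "EM-0001"},
    "derived_data": [
        {"type": "CTScan", "uri": "https://example.org/ct/ABC123"},
        {"type": "DNASequence", "uri": "https://example.org/seq/XYZ789"},
    ],
    "related_specimens": [
        {"relation": "parasite_collected_from", "id": "https://example.org/specimen/DEF456"},
    ],
    "contextual_data": [
        {"type": "TaxonomicName", "value": "Genus species"},
        {"type": "ConservationStatus", "value": "Least Concern"},
    ],
}
print(len(extended_digital_specimen["derived_data"]), "linked derived datasets")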


2021 ◽  
Author(s):  
Naman Bhoj ◽  
Ashutosh Tripathi

Abstract With the rise in the human population and the emergence of medical crises across the globe, it has become essential to develop smart systems that automate the identification of medical conditions and thereby provide timely aid to the patient. As a first step toward such a system, in this paper we aim to create a natural-language-based medical condition identification system. The user provides a text review of how they feel along with a few categorical features; based on these inputs, our model identifies the potential medical condition the user is suffering from. We employed three different machine-learning algorithms and mitigated class imbalance in our dataset. Empirical results indicate that Random Forest is the best machine-learning algorithm among all the investigated models with an accuracy of 80.39%, whereas the AdaBoost model showed the largest improvement after mitigating class imbalance, gaining 7.77 percentage points in accuracy.
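A hedged sketch of the kind of pipeline the abstract describes: free-text reviews combined with categorical features, several classifiers compared, and class imbalance mitigated (here via balanced class weights). The file name, column names, the third model and all hyperparameters are assumptions, not the authors' setup.

# Sketch of a text-plus-categorical classification pipeline with imbalance handling.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.read_csv("drug_reviews.csv")   # hypothetical file with 'review', 'age_group', 'condition'

features = ColumnTransformer([
    ("text", TfidfVectorizer(max_features=5000), "review"),          # free-text review
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["age_group"]),  # categorical feature(s)
])

models = {
    "random_forest": RandomForestClassifier(n_estimators=200, class_weight="balanced"),
    "adaboost": AdaBoostClassifier(n_estimators=200),
    "logreg": LogisticRegression(max_iter=1000, class_weight="balanced"),  # assumed third model
}

for name, clf in models.items():
    pipe = Pipeline([("features", features), ("clf", clf)])
    scores = cross_val_score(pipe, df, df["condition"], cv=5, scoring="accuracy")
    print(name, scores.mean())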


2020 ◽  
pp. 1-11
Author(s):  
Jie Liu ◽  
Lin Lin ◽  
Xiufang Liang

The online English teaching system places certain requirements on the intelligent scoring system, and the most difficult stage of intelligent scoring in the English test is scoring the English composition with an intelligent model. In order to improve the intelligence of English composition scoring, this study combines machine learning algorithms with intelligent image recognition technology and proposes an improved MSER-based character candidate region extraction algorithm and a convolutional neural network-based pseudo-character region filtering algorithm. In addition, in order to verify whether the proposed algorithm model meets the requirements of composition scoring, that is, to verify the feasibility of the algorithm, the performance of the model is analyzed through designed experiments. Moreover, the basic conditions for composition scoring are input into the model as constraints. The research results show that the proposed algorithm has a practical effect and can be applied to English assessment systems and online homework evaluation systems.
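The sketch below illustrates the general MSER-plus-CNN idea with OpenCV: MSER proposes candidate character regions, a geometric filter prunes them, and a trained CNN (represented here by a placeholder function) would reject pseudo-character regions. Thresholds and file names are assumptions; the paper's actual parameters and network are not given in the abstract.

# Sketch of MSER-based character candidate extraction with OpenCV.
import cv2

image = cv2.imread("composition_scan.png", cv2.IMREAD_GRAYSCALE)  # hypothetical scanned page

mser = cv2.MSER_create()
regions, _ = mser.detectRegions(image)

candidates = []
for points in regions:
    x, y, w, h = cv2.boundingRect(points)
    aspect = w / float(h)
    if 0.1 < aspect < 2.0 and 8 < h < 100:        # crude geometric filter (assumed thresholds)
        candidates.append((x, y, w, h))

# A trained CNN would score each candidate crop and discard pseudo-character
# regions; is_character() stands in for that model here.
def is_character(crop):
    return True   # placeholder for a CNN classifier

characters = [c for c in candidates
              if is_character(image[c[1]:c[1] + c[3], c[0]:c[0] + c[2]])]
print(f"{len(characters)} character candidates kept")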


2012 ◽  
Vol 9 (1) ◽  
pp. 43 ◽  
Author(s):  
Hueyling Tan

Molecular self-assembly is ubiquitous in nature and has emerged as a new approach to produce new materials in chemistry, engineering, nanotechnology, polymer science and materials science. Molecular self-assembly has been attracting increasing interest from the scientific community in recent years due to its importance in understanding biology and a variety of diseases at the molecular level. In the last few years, considerable advances have been made in the use of peptides as building blocks to produce biological materials for a wide range of applications, including fabricating novel supra-molecular structures and scaffolding for tissue repair. The study of biological self-assembly systems represents a significant advancement in molecular engineering and is a rapidly growing scientific and engineering field that crosses the boundaries of existing disciplines. Many self-assembling systems range from bi- and tri-block copolymers to DNA structures as well as simple and complex proteins and peptides. The ultimate goal is to harness molecular self-assembly such that design and control of bottom-up processes is achieved, thereby enabling exploitation of structures developed at the meso- and macroscopic scale for the purposes of life and non-life science applications. Such aspirations can be achieved through understanding the fundamental principles behind the self-organisation and self-synthesis processes exhibited by biological systems.


2020 ◽  
Author(s):  
Aleksandra Balliu ◽  
Aaltje Roelofje Femmigje Strijker ◽  
Michael Oschmann ◽  
Monireh Pourghasemi Lati ◽  
Oscar Verho

In this preprint, we present our initial results concerning a stereospecific Pd-catalyzed protocol for the C3 alkenylation and alkynylation of a proline derivative carrying the well-utilized 8-aminoquinoline directing group. Efficient C–H alkenylation was achieved with a wide range of vinyl iodides bearing different aliphatic, aromatic and heteroaromatic substituents, to furnish the corresponding C3 alkenylated products in good to high yields. In addition, we were able to show that this protocol can also be used to install an alkynyl group into the pyrrolidine scaffold when a TIPS-protected alkynyl bromide was used as the reaction partner. Furthermore, two different methods for the removal of the 8-aminoquinoline auxiliary are reported, which can enable access to both cis- and trans-configured carboxylic acid building blocks from the C–H alkenylation products.


2018 ◽  
Author(s):  
Sherif Tawfik ◽  
Olexandr Isayev ◽  
Catherine Stampfl ◽  
Joseph Shapter ◽  
David Winkler ◽  
...  

Materials constructed from different van der Waals two-dimensional (2D) heterostructures offer a wide range of benefits, but these systems have been little studied because of their experimental and computational complexity, and because of the very large number of possible combinations of 2D building blocks. The simulation of the interface between two different 2D materials is computationally challenging due to the lattice mismatch problem, which sometimes necessitates the creation of very large simulation cells for performing density-functional theory (DFT) calculations. Here we use a combination of DFT, linear regression and machine learning techniques to rapidly determine the interlayer distance between two different 2D materials stacked in a bilayer heterostructure, as well as the band gap of the bilayer. Our work provides an excellent proof of concept by quickly and accurately predicting a structural property (the interlayer distance) and an electronic property (the band gap) for a large number of hybrid 2D materials. This work paves the way for rapid computational screening of the vast parameter space of van der Waals heterostructures to identify new hybrid materials with useful and interesting properties.
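A minimal sketch of the screening approach in its general form: a regression model maps descriptors of the two constituent monolayers to a bilayer property such as the interlayer distance. The synthetic data, descriptor set and gradient-boosting model are assumptions, not the authors' actual features or algorithm.

# Sketch of predicting bilayer properties from monolayer descriptors.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Hypothetical dataset: each row describes a pair of stacked 2D materials.
# Columns could be lattice constants, monolayer band gaps, electronegativity difference, etc.
rng = np.random.default_rng(0)
X = rng.random((500, 6))
y_distance = rng.random(500) * 2 + 3      # interlayer distance in Å (synthetic)
y_gap = rng.random(500) * 2               # bilayer band gap in eV (synthetic)

X_train, X_test, d_train, d_test = train_test_split(X, y_distance, random_state=0)

distance_model = GradientBoostingRegressor().fit(X_train, d_train)
print("R^2 (interlayer distance):", distance_model.score(X_test, d_test))

# A second regressor of the same form would be trained for the band gap (y_gap).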


2019 ◽  
Vol 15 (3) ◽  
pp. 273-279
Author(s):  
Shweta G. Rangari ◽  
Nishikant A. Raut ◽  
Pradip W. Dhore

Background: Unstable and/or toxic degradation products may form as a drug degrades, resulting in loss of therapeutic activity and potentially life-threatening conditions. Hence, it is important to establish the stability characteristics of a drug under various conditions, such as temperature, light, oxidising agents and susceptibility across a wide range of pH values. Introduction: The aim of the proposed study was to develop a simple, sensitive and economic stability-indicating high-performance thin-layer chromatography (HPTLC) method for the quantification of Amoxapine in the presence of degradation products. Methods: Amoxapine and its degradation products were separated on precoated silica gel 60F254 TLC plates using a mobile phase comprising methanol:toluene:ammonium acetate (6:3:1, v/v/v). Densitometric evaluation was carried out at 320 nm in reflectance/absorbance mode. The degradation products obtained as per ICH guidelines under acidic, basic and oxidative conditions showed distinct Rf values of 0.12, 0.26 and 0.60, indicating good resolution from each other and from the pure drug (Rf 0.47). Amoxapine was found to be stable under neutral, thermal and photolytic conditions. Results: The method was validated as per ICH Q2(R1) guidelines in terms of accuracy, precision, ruggedness, robustness and linearity. Regression analysis showed a good linear relationship between concentration and response (peak area and peak height) over the range of 80 ng/spot to 720 ng/spot, with correlation coefficients of 0.991 and 0.994 for area and height, respectively. The limit of detection (LOD) and limit of quantitation (LOQ) were found to be 1.176 ng/mL and 3.565 ng/mL for area, and 50.063 ng/mL and 151.707 ng/mL for height, respectively. Conclusion: The statistical analysis confirmed the accuracy, precision and selectivity of the proposed method, which can be effectively used for the analysis of amoxapine in the presence of degradation products.
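For context, the sketch below shows the standard ICH-style calculations behind such figures: a calibration line fitted by least squares, and LOD/LOQ estimated as 3.3σ/S and 10σ/S from the residual standard deviation and the slope. The calibration data are placeholders, not the study's measurements, and the exact σ used by the authors is not stated in the abstract.

# Sketch of calibration linearity and LOD/LOQ estimation (ICH-style formulas).
import numpy as np

conc = np.array([80, 160, 240, 320, 400, 480, 560, 640, 720])                  # ng/spot
peak_area = np.array([510, 980, 1470, 1930, 2410, 2880, 3350, 3830, 4290])     # arbitrary units (placeholder)

slope, intercept = np.polyfit(conc, peak_area, 1)          # least-squares calibration line
r = np.corrcoef(conc, peak_area)[0, 1]                     # correlation coefficient
print(f"slope={slope:.2f}, intercept={intercept:.1f}, r={r:.4f}")

residual_sd = np.std(peak_area - (slope * conc + intercept), ddof=2)  # sigma of the response
lod = 3.3 * residual_sd / slope
loq = 10.0 * residual_sd / slope
print(f"LOD ~ {lod:.2f}, LOQ ~ {loq:.2f} (same units as the concentration axis)")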


2020 ◽  
Vol 09 ◽  
Author(s):  
Minita Ojha ◽  
R. K. Bansal

Background: During the last two decades, the horizon of research in the field of N-Heterocyclic Carbenes (NHCs) has widened remarkably. NHCs have emerged as ubiquitous species with applications in a broad range of fields, including organocatalysis and organometallic chemistry. NHC-induced non-asymmetric catalysis has turned out to be a particularly fruitful area of research in recent years. Methods: By manipulating structural features and selecting appropriate substituent groups, it has been possible to control the kinetic and thermodynamic stability of a wide range of NHCs, which can be tolerant to a variety of functional groups and can be used under mild conditions. NHCs are produced by different methods, such as deprotonation of N-alkylheterocyclic salts, transmetallation, decarboxylation and electrochemical reduction. Results: NHCs have been used successfully as catalysts for a wide range of reactions, making a large number of building blocks and other useful compounds accessible. Some of these reactions are: benzoin condensation, the Stetter reaction, the Michael reaction, esterification, activation of esters, activation of isocyanides, polymerization, different cycloaddition reactions, isomerization, etc. The present review includes such examples published during the last 10 years, i.e. from 2010 to date. Conclusion: NHCs have emerged as versatile and powerful organocatalysts in synthetic organic chemistry. They provide synthetic strategies that do not burden the environment with metal pollutants and thus fit within green chemistry.


2020 ◽  
Author(s):  
Sarah Delanys ◽  
Farah Benamara ◽  
Véronique Moriceau ◽  
François Olivier ◽  
Josiane Mothe

BACKGROUND With the advent of digital technology, and specifically user-generated content in social media, new ways have emerged for studying the possible stigmatization of people in relation to mental health. Several studies have examined the discourse about psychiatric pathologies on Twitter, mostly considering tweets in English and a limited number of psychiatric disorder terms. This paper proposes the first study to analyze the use of a wide range of psychiatric terms in tweets in French. OBJECTIVE Our aim is to study how generic, nosographic and therapeutic psychiatric terms are used on Twitter in French. More specifically, our study has three complementary goals: (1) to analyze the types of psychiatric word use, namely medical, misuse, or irrelevant; (2) to analyze the polarity conveyed in the tweets that use these terms (positive/negative/neutral); and (3) to compare the frequency of these terms to those observed in related work (mainly in English). METHODS Our study was conducted on a corpus of tweets in French posted between 01/01/2016 and 12/31/2018 and collected using dedicated keywords. The corpus was manually annotated by clinical psychiatrists following a multilayer annotation scheme that includes the type of word use and the opinion orientation of the tweet. Two analyses were performed: first, a qualitative analysis to measure the reliability of the manual annotation; then, a quantitative analysis considering mainly term frequency in each layer and exploring the interactions between them. RESULTS The first result is a resource in the form of an annotated dataset. The initial dataset is composed of 22,579 tweets in French containing at least one of the selected psychiatric terms. From this set, experts in psychiatry annotated a random sample of 3,040 tweets, which constitutes the resource resulting from our work. The second result is the analysis of the annotations; it shows that terms are misused in 45.3% of the tweets and that their associated polarity is negative in 86.2% of the cases. When considering the three types of term use, 59.5% of the tweets are associated with a negative polarity. Misused terms related to psychotic disorders (55.5%) are more frequent than those related to mood disorders (26.5%). CONCLUSIONS Some psychiatric terms are misused in the corpora we studied, which is consistent with results reported in related work in other languages. Thanks to the great diversity of studied terms, this work highlights a disparity in the representations and ways of using psychiatric terms. Moreover, our study can help psychiatrists become aware of how these terms are used in new communication media such as social networks, which are widely used. This study has the advantage of being reproducible thanks to the framework and guidelines we produced, so it can be repeated to analyze the evolution of term usage. While the newly built dataset is a valuable resource for other analytical studies, it could also serve to train machine learning algorithms to automatically identify stigma in social media.
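A small sketch of the quantitative layer of such an analysis: computing the share of each word-use type and the polarity distribution from the annotated tweets. The file name, column names and label values are assumptions about the dataset's structure, not its actual schema.

# Sketch of per-layer frequency analysis over the annotated tweets.
import pandas as pd

annotations = pd.read_csv("annotated_tweets_fr.csv")   # hypothetical export of the 3,040 annotated tweets

# Share of each word-use type (e.g., medical, misuse, irrelevant).
use_type_freq = annotations["word_use"].value_counts(normalize=True) * 100
print(use_type_freq.round(1))

# Polarity distribution overall and within misused terms.
polarity_freq = annotations["polarity"].value_counts(normalize=True) * 100
misuse_polarity = (annotations.loc[annotations["word_use"] == "misuse", "polarity"]
                   .value_counts(normalize=True) * 100)
print(polarity_freq.round(1))
print(misuse_polarity.round(1))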

