Application of simulation technologies in the analysis of granular material behaviour during transport and storage

Author(s):  
P Chapelle ◽  
N Christakis ◽  
J Wang ◽  
N Strusevich ◽  
M. K. Patel ◽  
...  

Problems in the preservation of the quality of granular material products are complex and arise from a series of sources during transport and storage. In either designing a new plant or, more likely, analysing problems that give rise to product quality degradation in existing operations, practical measurement and simulation tools and technologies are required to support the process engineer. These technologies are required to help both in identifying the source of such problems and then in designing them out. As part of a major research programme on quality in particulate manufacturing, computational models have been developed for segregation in silos, degradation in pneumatic conveyors, and the development of caking during storage, which use, where possible, micro-mechanical relationships to characterize the behaviour of granular materials. The objective of the work presented here is to demonstrate the use of these unit-process models in the analysis of large-scale processes involving the handling of granular materials. This paper presents a set of simulations of a complete large-scale granular materials handling operation, involving the discharge of the material from a silo, its transport through a dilute-phase pneumatic conveyor, and its storage in a big bag under varying environmental temperature and humidity conditions. Conclusions are drawn on the capability of the computational models to represent key granular processes, including particle size segregation, degradation, and moisture migration caking.
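As a purely illustrative aside, a micro-mechanical degradation rule of the general kind mentioned above can be sketched as a first-order breakage update applied at each conveyor bend; the velocity exponent, reference velocity, and fines redistribution rule below are assumptions for demonstration, not the models developed in the programme.

    # Purely illustrative sketch (not the programme's models): first-order
    # impact breakage of a particle size distribution applied at each bend
    # of a dilute-phase conveyor. The velocity exponent, reference velocity
    # and "move one size class finer" rule are assumptions for demonstration.
    import numpy as np

    def breakage_probability(v_impact, v_ref=10.0, exponent=2.0, scale=0.05):
        """Fraction of particles in a size class broken by one bend impact."""
        return min(1.0, scale * (v_impact / v_ref) ** exponent)

    def convey_through_bends(psd, v_impact, n_bends):
        """psd: mass fractions ordered from coarsest to finest size class."""
        psd = np.asarray(psd, dtype=float)
        for _ in range(n_bends):
            p = breakage_probability(v_impact)
            broken = psd[:-1] * p      # mass broken out of each coarse class
            psd[:-1] -= broken
            psd[1:] += broken          # broken mass shifts one class finer
        return psd

    print(convey_through_bends([0.5, 0.3, 0.2], v_impact=15.0, n_bends=6))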

Processes ◽  
2020 ◽  
Vol 8 (12) ◽  
pp. 1568
Author(s):  
Rebecca R. Milczarek ◽  
Carl W. Olsen ◽  
Ivana Sedej

Watermelon (Citrullus lanatus) juice is known for its refreshing flavor, but its high perishability limits its availability throughout the year. Watermelon juice concentrate has an extended shelf-life and lower transportation and storage costs, but the conventional thermal evaporation process for concentrating juice degrades the nutritional components and sensory quality of the product. Thus, in this work, a large-scale, non-thermal forward osmosis (FO) process was used to concentrate fresh watermelon juice up to 65 °Brix. The FO concentrate was compared to thermal concentrate and fresh juices, and to commercially available refrigerated watermelon juices, in terms of lycopene and citrulline content, total soluble phenolics, antioxidant activity, and sensory properties. The FO concentrate had statistically similar (p < 0.05) levels of all the nutrients of interest except antioxidant activity, when compared to the thermal concentrate. The reconstituted FO concentrate maintained the same antioxidant activity as the raw source juice, which was 45% higher than that of the reconstituted thermal concentrate. Sensory results showed that the reconstituted FO concentrate yielded a highly liked juice that outperformed the reconstituted thermal concentrate in hedonic ratings. This work demonstrates the feasibility of producing a high-quality watermelon juice concentrate by forward osmosis.


Author(s):  
Rafael Ferreira da Silva ◽  
Tristan Glatard ◽  
Frédéric Desprez

Science gateways, such as the Virtual Imaging Platform (VIP), enable transparent access to distributed computing and storage resources for scientific computations. However, their large scale and the number of middleware systems involved in these gateways lead to many errors and faults. This chapter addresses the autonomic management of workflow executions on science gateways in an online and non-clairvoyant environment, where the platform workload, task costs, and resource characteristics are unknown and not stationary. The chapter describes a general self-management process based on the MAPE-K loop (Monitoring, Analysis, Planning, Execution, and Knowledge) to cope with operational incidents of workflow executions. Then, this process is applied to handle late task executions, task granularities, and unfairness among workflow executions. Experimental results show how the approach achieves a fair quality of service by using control loops that constantly perform online monitoring, analysis, and execution of a set of curative actions.
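As a minimal sketch of the MAPE-K pattern the chapter builds on (the monitoring, analysis, planning, and execution hooks below are hypothetical stand-ins, not VIP's actual interfaces):

    # Minimal MAPE-K loop sketch. The four phase callbacks and the shared
    # knowledge store are hypothetical stand-ins, not VIP's interfaces.
    import time

    class MapeKLoop:
        def __init__(self, monitor, analyze, plan, execute, knowledge=None):
            self.monitor, self.analyze = monitor, analyze
            self.plan, self.execute = plan, execute
            self.knowledge = knowledge if knowledge is not None else {}

        def run(self, period_s=60.0, cycles=10):
            for _ in range(cycles):
                metrics = self.monitor(self.knowledge)             # M: observe running tasks
                incidents = self.analyze(metrics, self.knowledge)  # A: detect e.g. late tasks
                actions = self.plan(incidents, self.knowledge)     # P: choose curative actions
                self.execute(actions, self.knowledge)              # E: resubmit, regroup, ...
                time.sleep(period_s)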


Author(s):  
N. Broers ◽  
N.A. Busch

Many photographs of real-life scenes are very consistently remembered or forgotten by most people, making these images intrinsically memorable or forgettable. Although machine vision algorithms can predict a given image's memorability very well, nothing is known about the subjective quality of these memories: are memorable images recognized based on strong feelings of familiarity or on recollection of episodic details? We tested people's recognition memory for memorable and forgettable scenes selected from image memorability databases, which contain memorability scores for each image, based on large-scale recognition memory experiments. Specifically, we tested the effect of intrinsic memorability on recollection and familiarity using cognitive computational models based on receiver operating characteristics (ROCs; Experiments 1 and 2) and on remember/know (R/K) judgments (Experiment 2). The ROC data of Experiment 1 indicated that image memorability boosted memory strength, but did not find a specific effect on recollection or familiarity. By contrast, ROC data from Experiment 2, which was designed to facilitate encoding and, in turn, recollection, found evidence for a specific effect of image memorability on recollection. Moreover, R/K judgments showed that, on average, memorability boosts recollection rather than familiarity. However, we also found a large degree of variability in these judgments across individual images: some images actually achieved high recognition rates by exclusively boosting familiarity rather than recollection. Together, these results show that current machine vision algorithms that can predict an image's intrinsic memorability in terms of hit rates fall short of describing the subjective quality of human memories.
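For context, ROC-based estimates of recollection and familiarity are commonly obtained by fitting a dual-process signal detection model; a standard formulation (an assumption here, since the abstract does not name the exact model) is

$$P(\text{``old''}\mid\text{target}) = R + (1-R)\,\Phi\!\left(\tfrac{d'}{2}-c_i\right), \qquad P(\text{``old''}\mid\text{lure}) = \Phi\!\left(-\tfrac{d'}{2}-c_i\right),$$

where $R$ is the probability of recollection, $d'$ indexes familiarity strength, $\Phi$ is the standard normal CDF, and the criteria $c_i$ trace out the ROC; memorability could in principle act on $R$, on $d'$, or on both.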


Author(s):  
Tong Wang ◽  
Ping Chen ◽  
Boyang Li

An important and difficult challenge in building computational models for narratives is the automatic evaluation of narrative quality. Quality evaluation connects narrative understanding and generation, as generation systems need to evaluate their own products. To circumvent difficulties in acquiring annotations, we employ upvotes on social media as an approximate measure of story quality. We collected 54,484 answers from a crowd-powered question-and-answer website, Quora, and then used active learning to build a classifier that labeled 28,320 answers as stories. To predict the number of upvotes without the use of social network features, we created neural networks that model textual regions and the interdependence among regions, which serve as strong benchmarks for future research. To the best of our knowledge, this is the first large-scale study of automatic evaluation of narrative quality.
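The active-learning step can be illustrated with generic pool-based uncertainty sampling; the classifier, features, batch size, and annotation callback below are illustrative assumptions, not the authors' actual setup:

    # Generic pool-based active learning with uncertainty sampling.
    # Classifier, features, batch size and label_fn (an annotator callback
    # over current pool indices) are illustrative, not the paper's setup.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def active_learning(X_labeled, y_labeled, X_pool, label_fn, rounds=5, batch=20):
        clf = LogisticRegression(max_iter=1000)
        for _ in range(rounds):
            clf.fit(X_labeled, y_labeled)
            proba = clf.predict_proba(X_pool)[:, 1]
            # Query the pool examples the model is least certain about (p ~ 0.5).
            query = np.argsort(np.abs(proba - 0.5))[:batch]
            X_new, y_new = X_pool[query], label_fn(query)   # ask annotators
            X_labeled = np.vstack([X_labeled, X_new])
            y_labeled = np.concatenate([y_labeled, y_new])
            X_pool = np.delete(X_pool, query, axis=0)
        return clf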


2021 ◽  
Vol 20 (5) ◽  
pp. 2958
Author(s):  
M. S. Pokrovskaya ◽  
A. L. Borisova ◽  
V. A. Metelskaya ◽  
I. A. Efimova ◽  
Yu. V. Doludin ◽  
...  

The success and quality of large-scale epidemiological studies depend entirely on biomaterial quality. Therefore, when arranging the third Epidemiology of Cardiovascular Diseases and their Risk Factors in Regions of Russian Federation (ESSE-RF-3) study, increased attention was paid to the specifics of collection, processing and further transportation of biological samples and related clinical and anthropometric data of participants from regional collection centers to the Biobank. Aim. To develop a methodology for the collection of high-quality biomaterials within a large-scale epidemiological study, involving the sampling, processing and freezing of blood and its derivatives (serum, plasma) in the regions, followed by transportation and storage of the obtained biomaterial in the Biobank of the National Medical Research Center for Therapy and Preventive Medicine (Moscow). Material and methods. To conduct the ESSE-RF-3 study, a design was developed according to which venous blood samples with a total volume of 29.5 ml are collected from each participant in all participating regions in order to obtain and store samples of whole blood, serum and two types of plasma. Results. On the basis of international biobanking standards, ethical norms, experience from ESSE-RF and ESSE-RF-2, and literature data, a protocol for the biobanking of blood and its derivatives was developed. The type and number of serum and plasma aliquots obtained, the required standard technical means and consumables, as well as logistic requirements for the biomaterial were determined. Training programs for regional participants were developed. By the beginning of August 2021, 180 thousand samples of whole blood, serum and plasma from more than 23 thousand participants from 28 Russian regions had been collected, processed and stored. Conclusion. The presented work made it possible to assess and confirm the compliance of the developed biobanking protocol with quality requirements. However, due to the coronavirus disease 2019 pandemic, by August 2021 the Biobank had not reached the maximum effectiveness predicted for the ESSE-RF-3 project.


2005 ◽  
Vol 2005 (3) ◽  
pp. 291-296 ◽  
Author(s):  
Claire Mulot ◽  
Isabelle Stücker ◽  
Jacqueline Clavel ◽  
Philippe Beaune ◽  
Marie-Anne Loriot

Alternative sources such as buccal cells have already been tested for genetic studies and epidemiological investigations. Thirty-seven volunteers participated in this study to compare cytology brushes, mouthwash, and treated cards for DNA collection. Quantity and quality of DNA, cost, and feasibility were assessed. The mean DNA yield at 260 nm was found to be 3.5, 4, and 2.6 μg for cytobrushes, mouthwashes, and treated cards, respectively. A second quantification technique, by fluorescence, showed differences in the DNA yield, with 1.1 and 5.2 μg for cytobrushes and mouthwash, respectively. All buccal samples allowed isolation of DNA suitable for polymerase chain reaction. Considering the sample collection procedure, the yield and purity of the collected DNA, and storage conditions, the cytobrush appears to be the most appropriate method for DNA collection. This protocol has been validated and is currently applied in three large-scale multicentric studies including adults or children.


2021 ◽  
Vol 46 (4) ◽  
pp. 423-436
Author(s):  
Pawel Wojciechowski ◽  
Karol Krause ◽  
Piotr Lukasiak ◽  
Jacek Blazewicz

Implementing a large genomic project is a demanding task, also from the computer science point of view. Besides collecting and sequencing many genome samples, a huge amount of data must be processed at every stage of their production and analysis. Efficient transfer and storage of the data is also an important issue. During the execution of such a project, there is a need to maintain work standards and to control the quality of the results, which can be difficult if part of the work is carried out externally. Here, we describe our experience with such data quality analysis on a number of levels: from an obvious check of the quality of the results obtained, to examining the consistency of the data at various stages of their processing, to verifying, as far as possible, their compatibility with the data describing the sample.
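A basic check of the kind described, verifying that files returned by an external processing stage still match the checksums recorded when they were shipped, might look like this sketch (the manifest format and file layout are assumptions):

    # Illustrative consistency check: verify that data files received from
    # an external stage match the checksums recorded when they were shipped.
    # The manifest format and paths are assumptions for demonstration.
    import hashlib
    from pathlib import Path

    def sha256_of(path, chunk=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    def verify_manifest(manifest_path, data_dir):
        """Each manifest line: '<sha256>  <relative/file/name>'."""
        failures = []
        for line in Path(manifest_path).read_text().splitlines():
            expected, name = line.split(maxsplit=1)
            if sha256_of(Path(data_dir) / name) != expected:
                failures.append(name)
        return failures  # an empty list means all files are intact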


Author(s):  
A. Babirad

Cerebrovascular diseases are a problem of the modern world and, according to forecasts, will remain a problem in the near future. The main risk factors for the development of ischemic disorders of the cerebral circulation include obesity and aging, arterial hypertension, smoking, diabetes mellitus, and heart disease. An effective strategy for the prevention of cerebrovascular events is based on the implementation of large-scale risk control measures, including the use of antiplatelet and anticoagulant therapy and invasive interventions such as atherectomy, angioplasty, and stenting. In this connection, the combined efforts of neurologists, cardiologists, vascular surgeons, endocrinologists, and other specialists are the basis for achieving an acceptable clinical outcome. A review of the SF-36 method for assessing the quality of life in patients after transient ischemic attack is presented. Quality-of-life assessment is an indicator recognized in world medical practice and research that is also used to assess the quality of the healthcare system and in general sociological research.


Author(s):  
A. V. Ponomarev

Introduction: Large-scale human-computer systems involving people of various skills and motivation in information processing are currently used in a wide spectrum of applications. An acute problem in such systems is assessing the expected quality of each contributor, for example, in order to penalize incompetent or inaccurate contributors and to promote diligent ones. Purpose: To develop a method of assessing the expected contributor's quality in community tagging systems. This method should use only the generally unreliable and incomplete information provided by contributors (with ground truth tags unknown). Results: A mathematical model is proposed for community image tagging (including a model of a contributor), along with a method of assessing the expected contributor's quality. The method is based on comparing tag sets provided by different contributors for the same images; it is a modification of the pairwise comparison method with the preference relation replaced by a special domination characteristic. Expected contributor quality is evaluated as a positive eigenvector of a pairwise domination characteristic matrix. A community tagging simulation confirmed that the proposed method adequately estimates the expected quality of community tagging system contributors (provided that the contributors' behavior fits the proposed model). Practical relevance: The obtained results can be used in the development of systems based on the coordinated efforts of a community (primarily, community tagging systems).
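The eigenvector computation at the core of the method can be illustrated with standard power iteration on a nonnegative matrix; the domination matrix below is invented for demonstration, and the construction of the domination characteristic itself is not shown:

    # Power iteration for the positive (Perron) eigenvector of a nonnegative
    # pairwise domination matrix D, where D[i, j] reflects how strongly
    # contributor i's tags dominate contributor j's on shared images.
    # The matrix values here are invented for illustration.
    import numpy as np

    def expected_quality(D, tol=1e-10, max_iter=1000):
        n = D.shape[0]
        q = np.full(n, 1.0 / n)        # uniform starting vector
        for _ in range(max_iter):
            q_next = D @ q
            q_next /= q_next.sum()     # normalize to keep scores comparable
            if np.abs(q_next - q).max() < tol:
                break
            q = q_next
        return q_next

    D = np.array([[1.0, 2.0, 4.0],
                  [0.5, 1.0, 2.0],
                  [0.25, 0.5, 1.0]])
    print(expected_quality(D))         # higher score = higher expected quality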

