Controlling Liquid Slosh by Applying Optimal Operating-Speed-Dependent Motion Profiles

Robotics ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 18 ◽  
Author(s):  
Clemens Troll ◽  
Sven Tietze ◽  
Jens-Peter Majschak

This paper presents an investigation demonstrating a new approach for reducing liquid slosh, which occurs in the packaging process of beverages, by implementing optimized motion profiles over a continuous range of operating speeds. Starting from a dynamic process model, optimal control theory is applied to calculate optimal motion profiles that minimize residual vibration. Subsequently, the difficulty posed by the operating-speed dependency of the synthesized motion profiles is examined. An approach is discussed in which the optimal motion profiles are consolidated into a characteristic map of motion specifications that can be executed by a programmable logic controller in real time. Finally, the success of this novel approach is demonstrated by comparison with state-of-the-art motion profiles and conventional motion implementation.
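The characteristic-map idea described above can be sketched as a speed-indexed lookup with interpolation between stored profiles. The map contents, sampling, and function names below are invented for illustration; the paper's map holds PLC-executable motion specifications rather than raw position samples.

```python
import bisect

# Hypothetical characteristic map: operating speed (cycles/min) -> sampled
# motion profile (normalized positions over one machine cycle).
PROFILE_MAP = {
    20.0: [0.0, 0.10, 0.40, 0.80, 1.0],
    40.0: [0.0, 0.15, 0.50, 0.85, 1.0],
    60.0: [0.0, 0.20, 0.60, 0.90, 1.0],
}

def profile_for_speed(speed):
    """Linearly interpolate a motion profile for a continuous operating speed."""
    speeds = sorted(PROFILE_MAP)
    if speed <= speeds[0]:
        return PROFILE_MAP[speeds[0]]
    if speed >= speeds[-1]:
        return PROFILE_MAP[speeds[-1]]
    i = bisect.bisect_left(speeds, speed)
    lo, hi = speeds[i - 1], speeds[i]
    w = (speed - lo) / (hi - lo)
    # Blend the two neighboring stored profiles sample by sample.
    return [(1 - w) * a + w * b
            for a, b in zip(PROFILE_MAP[lo], PROFILE_MAP[hi])]
```

Interpolating between optimized profiles is what allows a single stored map to cover a continuous speed range instead of one fixed operating point.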

2020 ◽  
Author(s):  
Saniya Behzadpour ◽  
Torsten Mayer-Gürr ◽  
Andreas Kvas ◽  
Sandro Krauss ◽  
Sebastian Strasser ◽  
...  

In the GRACE-FO (Gravity Recovery and Climate Experiment Follow-On) mission, the twin satellites are, like their GRACE predecessors, equipped with three-axis accelerometers measuring the non-gravitational forces. After one month in orbit, the GRACE-D accelerometer data degraded, and its measurements were replaced by synthetic accelerometer data, the so-called transplant data, officially generated by the Jet Propulsion Laboratory (JPL). The transplant data are derived from the GRACE-C accelerometer measurements by applying time and attitude corrections and adding model-based residual accelerations due to thruster firings on GRACE-D.

For the ITSG-Grace2018 GRACE-FO release, the gravity field recovery is based on in-house Level-1B accelerometer data (ACT1B) derived from the provided Level-1A data products. In this work, we present a novel approach to recovering the ACT1B data by (a) implementing state-of-the-art non-gravitational force models and (b) applying additional force model corrections.

Preliminary results show that the improved ACT1B data not only contribute to a noise reduction but also improve the estimates of the C20 and C30 coefficients. We show that the offset between the SLR (Satellite Laser Ranging) and GRACE-FO derived C20 and C30 time series can be reduced remarkably by using the new accelerometer product, demonstrating the merit of this new approach.
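The two transplant corrections mentioned above, a time shift and an attitude rotation between the GRACE-C and GRACE-D reference frames, can be illustrated with a toy routine. The data layout and the single-axis yaw rotation below are simplifying assumptions for illustration, not JPL's actual processing.

```python
import math

def correct_accelerometer(samples, dt, yaw_rad):
    """Toy version of the two transplant corrections: a constant time shift
    (one satellite trails the other along the orbit) and an attitude rotation
    mapping one science reference frame onto the other.

    samples: list of (t, ax, ay, az) tuples; yaw_rad: rotation about z.
    """
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    out = []
    for t, ax, ay, az in samples:
        # Shift the timestamp and rotate the acceleration vector about z.
        out.append((t + dt, c * ax - s * ay, s * ax + c * ay, az))
    return out
```

The real processing additionally adds model-based residual accelerations for GRACE-D thruster firings, which this sketch omits.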


Author(s):  
Eric Rietzke ◽  
Carsten Maletzki ◽  
Ralph Bergmann ◽  
Norbert Kuhn

Abstract Modeling and executing knowledge-intensive processes (KiPs) are challenging with state-of-the-art approaches, and the specific demands of KiPs are the subject of ongoing research. In this context, little attention has been paid to the ontology-driven combination of data-centric and semantic business process modeling, which finds additional motivation in enabling the division of labor between humans and artificial intelligence. Such approaches have characteristics that could allow support for KiPs based on the inferencing capabilities of reasoners. We confirm this as we show that reasoners can infer the executability of tasks based on a currently researched ontology- and data-driven business process model (ODD-BP model). Further support for KiPs by the proposed inference mechanism results from its ability to infer the relevance of tasks, depending on the extent to which their execution would contribute to process progress. Besides these contributions along the execution perspective (start-to-end direction), we also show how our approach can help to reach specific process goals by inferring the relevance of process elements regarding their support in achieving such goals (end-to-start direction). The elements with the most valuable process progress can be identified in the intersection of both the execution and the goal perspective. This paper introduces this new approach and verifies its practicability with an evaluation of a KiP in the field of emergency call centers.
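A minimal stand-in can make the executability inference concrete. In the ODD-BP model a reasoner infers executability from the ontology; the sketch below simply declares a task executable once every data object it consumes is available. The task names (drawn from the emergency call center domain mentioned above) and the data layout are hypothetical.

```python
# Hypothetical data-driven task definitions: each task lists the data objects
# it needs and the ones it produces.
TASKS = {
    "classify_emergency": {"needs": {"caller_location", "incident_type"},
                           "produces": {"priority"}},
    "dispatch_unit":      {"needs": {"priority", "caller_location"},
                           "produces": {"dispatch_order"}},
}

def executable_tasks(available):
    """Tasks whose required data objects are all currently available."""
    return {name for name, t in TASKS.items() if t["needs"] <= set(available)}
```

Because executability follows mechanically from which data objects exist, completing one task can make further tasks executable, which is the start-to-end direction described above.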


Author(s):  
Aina Niemetz ◽  
Mathias Preiner ◽  
Andrew Reynolds ◽  
Clark Barrett ◽  
Cesare Tinelli

Abstract This paper presents a novel approach for quantifier instantiation in Satisfiability Modulo Theories (SMT) that leverages syntax-guided synthesis (SyGuS) to choose instantiation terms. It targets quantified constraints over background theories such as (non)linear integer and real arithmetic, floating-point arithmetic, bit-vectors, and their combinations. Unlike previous approaches for quantifier instantiation in these domains, which rely on theory-specific strategies, the new approach can be applied to any (combined) theory, when provided with a grammar of instantiation terms for all sorts in the theory. We implement syntax-guided instantiation in the SMT solver CVC4, leveraging its support for enumerative SyGuS. Our experiments demonstrate the versatility of the approach, showing that it is competitive with or exceeds the performance of state-of-the-art solvers on a range of background theories.
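A toy version of enumerative, grammar-driven instantiation can illustrate the idea (a sketch under simplifying assumptions, not CVC4's algorithm): candidate terms for a quantified variable are generated from a small grammar in order of size and tried until one falsifies the body of the quantified constraint, yielding a conflicting instance the solver can learn from.

```python
# Toy grammar for integer instantiation terms:  T ::= 0 | 1 | y | T + T
# where y is a free constant with a value in the current model.

def terms_up_to(size, model):
    """Return (text, value) pairs for all grammar terms up to `size` nodes."""
    by_size = {1: [("0", 0), ("1", 1), ("y", model["y"])]}
    for n in range(2, size + 1):
        by_size[n] = [
            (f"({ta} + {tb})", va + vb)
            for i in range(1, n - 1)
            for ta, va in by_size[i]
            for tb, vb in by_size[n - 1 - i]
        ]
    return [t for n in range(1, size + 1) for t in by_size[n]]

def select_instance(body, model, max_size=4):
    """First (smallest) term whose value falsifies `forall x. body(x)`."""
    for text, value in terms_up_to(max_size, model):
        if not body(value, model):
            return text, value
    return None
```

For example, for the constraint forall x. x + x != y under a model with y = 2, the enumeration finds the instantiation x := 1, which refutes the quantifier. Only the grammar changes per theory, which is what makes the approach theory-agnostic.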


2021 ◽  
pp. 109442812110029
Author(s):  
Tianjun Sun ◽  
Bo Zhang ◽  
Mengyang Cao ◽  
Fritz Drasgow

With the increasing popularity of noncognitive inventories in personnel selection, organizations typically wish to be able to tell when a job applicant purposefully manufactures a favorable impression. Past faking research has primarily focused on reducing faking via instrument design, warnings, and statistical corrections. This article takes a new approach by examining the effects of faking (experimentally manipulated and contextually driven) on response processes. We modified a recently introduced item response theory tree modeling procedure, the three-process model, to identify faking in two studies. Study 1 examined self-reported vocational interest assessment responses using an induced-faking experimental design. Study 2 examined self-reported personality assessment responses when some people were in a high-stakes situation (i.e., selection). Across the two studies, individuals instructed or expected to fake were found to engage in more extreme responding. By identifying the underlying differences between fakers and honest respondents, the new approach improves our understanding of faking. Percentage cutoffs based on extreme responding produced a faker classification precision of 85% on average.
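The percentage-cutoff classification can be sketched as follows. The Likert scale, the 0.6 cutoff, and the function names are invented for the example; the 85% precision figure above comes from the authors' analysis, not from this toy.

```python
# On a 1-5 Likert scale, the endpoints 1 and 5 count as extreme responses.
def extreme_rate(responses, endpoints=(1, 5)):
    """Share of a respondent's answers that sit at the scale endpoints."""
    return sum(r in endpoints for r in responses) / len(responses)

def flag_faker(responses, cutoff=0.6):
    """Flag a respondent whose share of extreme responses exceeds the cutoff."""
    return extreme_rate(responses) > cutoff
```

The finding above is that induced or expected fakers show elevated extreme responding, so a simple rate statistic like this already separates many fakers from honest respondents.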


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 1962
Author(s):  
Enrico Buratto ◽  
Adriano Simonetto ◽  
Gianluca Agresti ◽  
Henrik Schäfer ◽  
Pietro Zanuttigh

In this work, we propose a novel approach for correcting multi-path interference (MPI) in Time-of-Flight (ToF) cameras by estimating the direct and global components of the incoming light. MPI is an error source linked to the multiple reflections of light inside a scene; each sensor pixel receives information coming from different light paths, which generally leads to an overestimation of the depth. We introduce a novel deep learning approach that estimates the structure of the time-dependent scene impulse response and from it recovers a depth image with a reduced amount of MPI. The model consists of two main blocks: a predictive model that learns a compact encoded representation of the backscattering vector from the noisy input data, and a fixed backscattering model that translates the encoded representation into the high-dimensional light response. Experimental results on real data show the effectiveness of the proposed approach, which reaches state-of-the-art performance.


2021 ◽  
Vol 9 (1) ◽  
Author(s):  
Karin Wildi ◽  
Samantha Livingstone ◽  
Chiara Palmieri ◽  
Gianluigi LiBassi ◽  
Jacky Suen ◽  
...  

Abstract The acute respiratory distress syndrome (ARDS) is a severe lung disorder with high morbidity and mortality that affects all age groups. Despite active research and intense, ongoing attempts to develop pharmacological agents to treat ARDS, its mortality rate remains high and treatment is still only supportive. Over the years, there have been many attempts, most of them unsuccessful, to identify meaningful subgroups among the heterogeneous ARDS population likely to react differently to treatment. Only recently, analyses of large ARDS cohorts from randomized controlled trials have identified distinct biological subphenotypes among ARDS patients: a hypoinflammatory (or uninflamed; named P1) and a hyperinflammatory (or reactive; named P2) subphenotype have been proposed and corroborated with existing retrospective data. The hyperinflammatory subphenotype was clearly associated with shock state, metabolic acidosis, and worse clinical outcomes. Core features of the respective subphenotypes were identified consistently in all assessed cohorts, independently of the studied population, the geographical location, the study design, or the analysis method. Additionally, and clinically even more relevant, treatment efficacy, as assessed retrospectively, appeared to be highly dependent on the respective subphenotype. This discovery launches a promising new approach to targeted medicine in ARDS. Even though it is now widely accepted that each ARDS subphenotype has distinct functional, biological, and mechanistic differences, there are crucial gaps in our knowledge, hindering translation to bedside application. First, the underlying driving biological factors are still largely unknown; second, there is currently no option for fast and easy identification of ARDS subphenotypes.
This narrative review aims to summarize the evidence on biological subphenotyping in ARDS and to point out the issues that will need to be addressed before the translation of biological subphenotypes into clinical practice becomes possible.


2021 ◽  
Vol 11 (9) ◽  
pp. 4241
Author(s):  
Jiahua Wu ◽  
Hyo Jong Lee

In bottom-up multi-person pose estimation, grouping joint candidates into the appropriately structured corresponding instance of a person is challenging. In this paper, a new bottom-up method, the Partitioned CenterPose (PCP) Network, is proposed to better cluster the detected joints. To achieve this goal, we propose a novel approach called Partition Pose Representation (PPR), which integrates the instance of a person and its body joints based on joint offset. PPR leverages information about the center of the human body and the offsets between that center point and the positions of the body's joints to encode human poses accurately. To enhance the relationships between body joints, we divide the human body into five parts and generate a sub-PPR for each part. Based on this PPR, the PCP Network can detect people and their body joints simultaneously, then group all body joints according to joint offset. Moreover, an improved L1 loss is designed to measure joint offset more accurately. Testing on the COCO keypoints and CrowdPose datasets showed that the performance of the proposed method is on par with that of existing state-of-the-art bottom-up methods in terms of accuracy and speed.
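The offset-based grouping step can be sketched as follows (an illustrative re-implementation of the general idea with a hypothetical data layout, not the authors' network outputs): each joint's predicted offset votes for a person center, and the joint is assigned to the nearest detected center.

```python
def group_joints(centers, joints):
    """Assign each joint to a person by its offset vote.

    centers: list of (x, y) detected person centers.
    joints:  list of (x, y, dx, dy), where (dx, dy) is the predicted offset
             from the person's center to the joint, so the joint's vote for
             its center is (x - dx, y - dy).
    Returns one person index per joint.
    """
    assigned = []
    for x, y, dx, dy in joints:
        cx, cy = x - dx, y - dy  # this joint's vote for its person center
        dists = [(cx - px) ** 2 + (cy - py) ** 2 for px, py in centers]
        assigned.append(dists.index(min(dists)))
    return assigned
```

Because grouping reduces to a nearest-center lookup, the quality of the offset regression (hence the improved L1 loss mentioned above) directly determines how reliably joints land on the right person.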


2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Sukruti Bansal ◽  
Silvia Nagy ◽  
Antonio Padilla ◽  
Ivonne Zavala

Abstract Recent progress in understanding de Sitter spacetime in supergravity and string theory has led to the development of a four-dimensional supergravity with spontaneously broken supersymmetry allowing for de Sitter vacua, also called de Sitter supergravity. One approach makes use of constrained (nilpotent) superfields, while an alternative one couples supergravity to a locally supersymmetric generalization of the Volkov-Akulov goldstino action. These two approaches have been shown to give rise to the same 4D action. A novel approach to de Sitter vacua in supergravity involves the generalisation of unimodular gravity to supergravity using a super-Stückelberg mechanism. In this paper, we make a connection between this new approach and the previous two, which are formulated in terms of nilpotent superfields and the goldstino brane. We show that upon appropriate field redefinitions, the 4D actions match up to cubic order in the fields. This points to the possible existence of a more general framework for obtaining de Sitter spacetimes from high-energy theories.


2021 ◽  
pp. 209653112098296
Author(s):  
Yan Tang

Purpose: This study explores a novel approach to compiling life-oriented moral textbooks for elementary schools in China, specifically focusing on Morality and Law. Design/Approach/Methods: Adopting Aristotle's Poetics as its theoretical perspective, this study illustrates and analyzes the mimetic approach used in compiling the life-oriented moral education textbook Morality and Law. Findings: The mimetic approach involves imitating children's real activities, thoughts, and feelings in textbooks. It comprises three strategies: constructing children's life events as building blocks for textbook compilation, designing an intricate textual device that exposes the wholeness of children's life actions, and designing inward learning activities that lead to children's inner worlds. Originality/Value: From the perspective of Aristotle's Poetics, the approach to compilation in Morality and Law can be defined as mimetic, and the compilation activity in the life-oriented moral education textbook can be described as a process of mimesis. This article thus presents a new approach to compiling moral education textbooks and an innovative way to understand the nature of the compiling activity.


2020 ◽  
pp. 1-16
Author(s):  
Meriem Khelifa ◽  
Dalila Boughaci ◽  
Esma Aïmeur

The Traveling Tournament Problem (TTP) is concerned with finding a double round-robin tournament schedule that minimizes the total distance traveled by the teams. It has attracted significant interest recently, since a favorable TTP schedule can result in significant savings for the league. This paper proposes an original evolutionary algorithm for the TTP. We first propose a quick and effective constructive algorithm to build a Double Round-Robin Tournament (DRRT) schedule with low travel cost. We then describe an enhanced genetic algorithm with a new crossover operator to improve the travel cost of the generated schedules. A new heuristic for efficiently ordering the scheduled rounds is also proposed, which leads to a significant enhancement in the quality of the schedules. The overall method is evaluated on publicly available standard benchmarks and compared with other techniques for the TTP and the UTTP (Unconstrained Traveling Tournament Problem). Computational experiments show that the proposed approach builds very good solutions, comparable to those of other state-of-the-art approaches or better than the current best solutions on the UTTP. Further, our method provides new valuable solutions to some unsolved UTTP instances and outperforms prior methods on all US National League (NL) instances.
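A baseline DRRT constructor can be sketched with the classical circle method (an illustration only; the paper's constructive algorithm additionally targets low travel cost): each of the n-1 single round-robin rounds is mirrored with home and away swapped, giving the 2(n-1) rounds of a double round-robin.

```python
def double_round_robin(teams):
    """Build a DRRT schedule: rounds of (home, away) games in which every
    ordered pair of teams meets exactly once."""
    n = len(teams)
    assert n % 2 == 0, "circle method needs an even number of teams"
    rot = list(teams)
    first_half = []
    for _ in range(n - 1):
        # Pair the rotating circle end to end; rot[0] is the fixed pivot.
        first_half.append([(rot[i], rot[n - 1 - i]) for i in range(n // 2)])
        rot = [rot[0]] + [rot[-1]] + rot[1:-1]  # rotate all but the pivot
    # Mirror the first half with home/away swapped for the return legs.
    mirror = [[(away, home) for home, away in rnd] for rnd in first_half]
    return first_half + mirror
```

A schedule like this is feasible but travel-oblivious, which is exactly the gap the paper's constructive algorithm and genetic improvement steps address.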

