Finding the Perfect Match: Different Heavy-Duty Mobile Applications Call for Different Actuators

Proceedings ◽  
2020 ◽  
Vol 64 (1) ◽  
pp. 22
Author(s):  
David Fassbender ◽  
Tatiana Minav

For the longest time, valve-controlled, centralized hydraulic systems have been the state-of-the-art technology for actuating heavy-duty mobile machine (HDMM) implements. Due to the typically low energy efficiency of those systems, many promising, more efficient actuator concepts have been proposed by academia as well as industry over the last decades as potential replacements for valve control, e.g., independent metering, displacement control, different types of electro-hydraulic actuators (EHAs), electro-mechanic actuators, or hydraulic transformers. This paper takes a closer look at specific HDMM applications for these actuator concepts to determine where each novel concept can be a better alternative to conventional actuator concepts, and where novel concepts might fail to improve. For this purpose, a novel evaluation algorithm for actuator–HDMM matches is developed based on problem aspects that can indicate an unsuitable actuator–HDMM match. To demonstrate the functionality of the match evaluation algorithm, four actuator concepts and four HDMM types are analyzed and rated in order to form 16 potential actuator–HDMM matches that can be evaluated by the novel algorithm. The four actuator concepts comprise a conventional valve-controlled concept and three different types of EHAs. The HDMM types are excavator, wheel loader, backhoe, and telehandler. Finally, the evaluation of the 16 matches results in 16 mismatch values, of which the lowest indicates the “perfect match”. Low mismatch values were found in general for EHAs in combination with most HDMMs, but also for a valve-controlled actuator concept in combination with a backhoe. Furthermore, an analysis of the concept limitations with suggestions for improvement is included.
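
As a minimal sketch of the matching idea, assuming (hypothetically) that each problem aspect contributes a weighted penalty, a mismatch value could be computed as below; the paper's actual aspect definitions and weighting scheme are not reproduced here.

def mismatch(actuator_aspects, hdmm_weights):
    """Sum weighted penalties over the problem aspects shared by both dicts.

    actuator_aspects: aspect name -> how severe the problem is for this actuator (0..1)
    hdmm_weights:     aspect name -> how strongly the HDMM is affected by it (0..1)
    """
    return sum(actuator_aspects[a] * hdmm_weights[a]
               for a in actuator_aspects.keys() & hdmm_weights.keys())

# Hypothetical example: a valve-controlled actuator on a backhoe.
valve_controlled = {"energy_losses": 0.8, "installation_space": 0.1, "extra_cost": 0.2}
backhoe = {"energy_losses": 0.3, "installation_space": 0.5, "extra_cost": 0.7}
print(mismatch(valve_controlled, backhoe))  # the lowest value marks the "perfect match"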

Author(s):  
E. Kakaras ◽  
A. Koumanakos ◽  
A. Doukelis ◽  
D. Giannakopoulos ◽  
Ch. Hatzilau ◽  
...  

The scope of the work presented here is to examine and evaluate the state of the art in technological concepts for the capture and sequestration of CO2 from coal-fired power plants. The discussion is based on the evaluation of a novel concept, the carbonation-calcination process of lime for CO2 capture from coal-fired power plants, compared with the integration of CO2 capture in an Integrated Gasification Combined Cycle (IGCC) power plant. In the novel concept, coal is gasified with steam in the presence of lime. Lime absorbs the CO2 released from the coal, producing limestone. The produced gas can be a low-carbon or even zero-carbon (H2) gas, depending on the ratio of lime added to the process. The produced gas can be used in state-of-the-art combined cycles for electricity generation, producing almost no CO2 emissions or other harmful pollutants. The limestone is regenerated in a second reactor, where pure CO2 is produced, which can be either marketed to industry or sequestered in long-term disposal areas. The simulation model of a Combined Cycle power plant integrating the novel carbonation-calcination process is based on available data from a typical natural-gas-fired Combined Cycle power plant. The natural-gas-fired power plant was adapted to firing with the low-carbon fuel while maintaining the basic operating characteristics. The performance of the novel concept power plant is compared to that of an IGCC with CO2 removal by means of Selexol absorption. Results from thermodynamic simulation, dealing with the most important features for CO2 reduction, are presented. The operating characteristics, as well as the main figures of the plant energy balances, are included. A preliminary economic comparison is also provided, taking into account investment and operating costs, in order to estimate the electricity cost of the two technological approaches, and the economic constraints on the potential for application are examined. The cycle calculations were performed using the thermodynamic cycle calculation software ENBIPRO (ENergie-BIllanz-PROgram). ENBIPRO is a powerful tool for heat and mass balance calculations, solving complex thermodynamic circuits, calculating efficiency, and allowing exergetic and exergoeconomic analysis of power plants. The software models all pieces of equipment that usually appear in power plant installations and can accurately calculate all thermodynamic properties (temperature, pressure, enthalpy) at each node of the thermodynamic circuit, the power consumption of each component, the flue gas composition, etc. [1]. The code has proven its validity by accurately simulating a large number of power plants and through comparison of the results with other commercial software.
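
A minimal stoichiometric sketch of the carbonation step (CaO + CO2 -> CaCO3) may help illustrate how the lime-to-coal ratio controls the share of CO2 that is bound. The coal carbon content and the complete-conversion assumption below are hypothetical; the plant-level balances computed by ENBIPRO are far more detailed.

M_C, M_CAO = 12.011, 56.077  # molar masses, g/mol

def co2_capture_fraction(lime_per_coal, carbon_mass_fraction=0.7):
    """Fraction of the coal's carbon bound as limestone, capped at 1.0.

    Assumes all coal carbon is released as CO2 and carbonation runs to
    completion; real conversion is limited by kinetics and sorbent decay.
    """
    mol_cao = lime_per_coal / M_CAO        # kmol CaO per kg coal
    mol_co2 = carbon_mass_fraction / M_C   # kmol CO2 per kg coal
    return min(mol_cao / mol_co2, 1.0)

for ratio in (1.0, 2.0, 3.0, 4.0):
    print(f"lime/coal = {ratio:.1f} kg/kg -> capture fraction {co2_capture_fraction(ratio):.2f}")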


Actuators ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 111
Author(s):  
David Fassbender ◽  
Tatiana Minav

In recent years, a variety of novel actuator concepts for the implements of heavy-duty mobile machines (HDMMs) has been proposed by industry and academia. Mostly, novel concepts aim at improving the typically low energy efficiency of state-of-the-art hydraulic valve-controlled actuators. However, besides energy efficiency, many aspects that are crucial for a successful concept integration are often neglected in studies. Furthermore, most studies focus on one specific HDMM as the application, while other HDMM types can show very different properties that might make a novel concept less suitable. In order to take more aspects and HDMM types into account when evaluating actuator concepts, this paper proposes a novel evaluation algorithm, which calculates so-called mismatch values for each potential actuator-application match, based on different problem aspects that can indicate a potential mismatch between a certain actuator concept and an HDMM. The lower the mismatch value, which depends on actuator characteristics as well as HDMM attributes, the more promising the match. At the same time, the modular nature of the algorithm makes it possible to evaluate a large number of possible matches at once with little effort. To demonstrate the performance of the algorithm, 36 potential matches formed out of six actuator concepts and six HDMM types are evaluated as an example. The resulting actuator concept ratings for the six different HDMMs are in line with general reasoning and confirm that the evaluation algorithm is a powerful tool for getting a first, quick overview of a large solution space of actuator-HDMM matches. However, analyzing the limitations of the algorithm also shows that it cannot replace conventional requirements engineering and simulation studies if detailed and reliable results are required.
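
Because the algorithm is modular, all pairwise combinations can be scored and ranked in one pass. The sketch below, with placeholder aspect names and scores (not the paper's data), shows how a full actuator-HDMM matrix might be evaluated at once.

def mismatch(aspect_severities, aspect_weights):
    # Weighted penalty over the problem aspects shared by actuator and machine.
    return sum(aspect_severities[a] * aspect_weights[a]
               for a in aspect_severities.keys() & aspect_weights.keys())

actuators = {  # aspect severities per actuator concept (placeholder values)
    "valve_controlled": {"energy_losses": 0.8, "extra_cost": 0.2},
    "EHA_variant": {"energy_losses": 0.2, "extra_cost": 0.6},
}
hdmms = {  # aspect weights per machine type (placeholder values)
    "excavator": {"energy_losses": 0.9, "extra_cost": 0.4},
    "telehandler": {"energy_losses": 0.4, "extra_cost": 0.8},
}

ranked = sorted(((a, h, mismatch(sev, w))
                 for a, sev in actuators.items()
                 for h, w in hdmms.items()),
                key=lambda triple: triple[2])
for a, h, value in ranked:  # best match first
    print(f"{a} + {h}: mismatch {value:.2f}")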


2021 ◽  
Author(s):  
Aditeya Pandey ◽  
Uzma Haque Syeda ◽  
Chaitya Shah ◽  
John Alexis Guerra Gomez ◽  
Michelle Borkin

In the field of information visualization, the concept of "tasks" is an essential component of theories and methodologies for how a visualization researcher or a practitioner understands what tasks a user needs to perform and how to approach the creation of a new design. In this paper, we focus on the collection of tasks for tree visualizations, a common visual encoding in many domains ranging from biology to computer science to geography. In spite of their commonality, no prior efforts exist to collect and abstractly define tree visualization tasks. We present a literature review of tree visualization papers and generate a curated dataset of over 200 tasks. To enable effective task abstraction for trees, we also contribute a novel extension of the Multi-Level Task Typology that adds the specificity needed to support tree-specific tasks, as well as a systematic procedure for conducting task abstractions for tree visualizations. All tasks in the dataset were abstracted with the novel typology extension and analyzed to gain a better understanding of the state of tree visualizations. These abstracted tasks can benefit visualization researchers and practitioners as they design evaluation studies or compare their analytical tasks with ones previously studied in the literature to make informed decisions about their designs. We also reflect on our novel methodology and advocate more broadly for the creation of task-based knowledge repositories for different types of visualizations. The Supplemental Material will be maintained on OSF: https://osf.io/u5ehs/
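
As a rough sketch of what an abstracted task record could look like under a multi-level typology extended with tree-specific detail, consider the following; the field names and sample entries are illustrative guesses, not the authors' actual schema or dataset.

from collections import Counter
from dataclasses import dataclass

@dataclass
class TreeTask:
    why: str            # analytical goal, e.g. "identify", "compare"
    how: str            # method, e.g. "browse", "locate"
    what: str           # target, e.g. "node attribute", "topology"
    tree_specific: str  # tree-level detail, e.g. "sibling subtrees"
    source: str         # paper the task was collected from

tasks = [
    TreeTask("identify", "locate", "node attribute", "leaf nodes", "paper A"),
    TreeTask("compare", "browse", "topology", "sibling subtrees", "paper B"),
]

# A simple aggregate analysis over the curated dataset, e.g. counting goals:
print(Counter(task.why for task in tasks))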


2021 ◽  
Vol 9 (1) ◽  
pp. 69-110
Author(s):  
Shailesh Kumar Shivakumar

In this paper, the authors introduce the novel concept of intent-based code search, which categorizes code search goals into a hierarchy. They explore state-of-the-art techniques in source code search, covering various tools, techniques, and algorithms related to source code search. They survey the code search field through the core use cases of code search, such as code reusability, code understanding, and code repair. They propose a user intent-based taxonomy based on code search goals. The taxonomy is derived from a deep analysis of the code search literature and validated by an exclusive developer survey conducted as part of this paper. The taxonomy is built on a logical categorization of code search goals and the shared characteristics (query type, expected response, and so on) of each category. The paper also details the latest trends, surveys code search tools, and discusses the implications for tool design.
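
A minimal sketch of such a goal taxonomy is given below: top-level intents map to shared characteristics such as query type and expected response. The entries are illustrative, loosely following the use cases named above rather than the paper's exact categories.

taxonomy = {
    "code_reuse": {
        "query_type": "natural-language description or API name",
        "expected_response": "ready-to-adapt code snippet",
    },
    "code_understanding": {
        "query_type": "identifier or code fragment",
        "expected_response": "usage examples and documentation",
    },
    "code_repair": {
        "query_type": "failing code or error message",
        "expected_response": "candidate fix or patch",
    },
}

def route_query(intent):
    """Pick the search-goal category for a user's stated intent."""
    return taxonomy.get(intent, taxonomy["code_understanding"])

print(route_query("code_repair")["expected_response"])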


Energies ◽  
2021 ◽  
Vol 14 (21) ◽  
pp. 7223
Author(s):  
Zhishun Wei ◽  
Tharishinny Raja Mogan ◽  
Kunlei Wang ◽  
Marcin Janczarek ◽  
Ewa Kowalska

In the past few decades, extensive studies have been performed to utilize solar energy for photocatalytic water splitting; however, up to the present, the overall efficiencies reported in the literature are still unsatisfactory for commercialization. The crucial element of this challenging concept is the proper selection and design of the photocatalytic material to enable a significant extension of practical application perspectives. One important, although underestimated, feature in describing photocatalysts is particle morphology. Accordingly, this review presents the advances achieved in the design of photocatalysts dedicated to hydrogen generation, with an emphasis on particle morphology and its potential correlation with the overall reaction performance. The novel concept of this work, with the content presented in a clear and logical way, is based on the division into five parts according to dimensional arrangement groups: 0D, 1D, 2D, 3D, and combined systems. In this regard, it has been shown that consideration of the discussed aspects, focusing on different types of particle morphology and their correlation with the system's efficiency, could be a promising route for accelerating the development of photocatalytic materials oriented toward solar-driven hydrogen generation. Finally, concluding remarks (including also the problems connected with experiments) and potential future directions of particle-morphology-based design of photocatalysts for hydrogen production systems are presented.


Water ◽  
2019 ◽  
Vol 11 (5) ◽  
pp. 973
Author(s):  
Sara Saravi ◽  
Roy Kalawsky ◽  
Demetrios Joannou ◽  
Monica Rivas Casado ◽  
Guangtao Fu ◽  
...  

The main focus of this paper is the novel use of Artificial Intelligence (AI) in natural disasters, more specifically flooding, to improve flood resilience and preparedness. Different types of flood have varying consequences and follow specific patterns. For example, a flash flood can be a result of snow or ice melt and can occur in specific geographic places and in certain seasons. The motivation behind this research arose from the Building Resilience into Risk Management (BRIM) project, which looks at resilience in water systems. This research applies state-of-the-art techniques, i.e., AI and more specifically Machine Learning (ML) approaches, to big data collected from previous flood events in order to learn from the past, extract patterns and information, and understand flood behaviours, so as to improve resilience, prevent damage, and save lives. In this paper, various ML models have been developed and evaluated for classifying floods, e.g., flash flood, lakeshore flood, etc., using current information, i.e., weather forecasts for different locations. The analytical results show that the Random Forest technique provides the highest classification accuracy, followed by the J48 decision tree and Lazy methods. The classification results can lead to better decision-making on what measures can be taken for prevention and preparedness and thus improve flood resilience.
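
A minimal sketch of this classification setup, using scikit-learn's RandomForestClassifier on synthetic stand-in features, is shown below; the paper's real weather-forecast features, data, and evaluation protocol are not reproduced here.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical features: rainfall (mm), temperature (C), snowmelt index, river level (m)
X = rng.random((500, 4)) * [200.0, 30.0, 1.0, 5.0]
# Hypothetical labels: 0 = flash flood, 1 = lakeshore flood, 2 = fluvial flood
y = rng.integers(0, 3, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))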


Mathematics ◽  
2018 ◽  
Vol 6 (9) ◽  
pp. 161
Author(s):  
Mario Albert ◽  
Werner Seiler

We introduce the novel concept of a resolving decomposition of a polynomial module as a combinatorial structure that allows for the effective construction of free resolutions. It provides a unifying framework for recent results of the authors for different types of bases.
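
As a standard textbook illustration of the kind of object being constructed (not an example from the paper itself), the minimal free resolution of M = S/(x, y) over S = k[x, y] is:

% Koszul resolution of S/(x,y); a resolving decomposition of M yields this resolution.
\[
0 \longrightarrow S(-2)
  \xrightarrow{\begin{pmatrix} -y \\ x \end{pmatrix}} S(-1)^{2}
  \xrightarrow{\begin{pmatrix} x & y \end{pmatrix}} S
  \longrightarrow S/(x,y) \longrightarrow 0
\]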


Author(s):  
Rocio Vargas ◽  
Amir Mosavi ◽  
Ramon Ruiz

Deep learning is an emerging area of machine learning (ML) research. It comprises multiple hidden layers of artificial neural networks. The deep learning methodology applies nonlinear transformations and high-level model abstractions in large databases. The recent advancements in deep learning architectures within numerous fields have already provided significant contributions to artificial intelligence. This article presents a state-of-the-art survey of the contributions and the novel applications of deep learning. The review chronologically presents how, and in what major applications, deep learning algorithms have been utilized. Furthermore, the superiority and benefits of the deep learning methodology, with its hierarchy of layers and nonlinear operations, are presented and compared with the more conventional algorithms in common applications. The state-of-the-art survey further provides a general overview of the novel concept and the ever-increasing advantages and popularity of deep learning.
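
As a toy illustration of the stacked nonlinear transformations mentioned above, the NumPy sketch below runs a forward pass through a small multi-layer network; it is a generic example, not any specific architecture from the surveyed literature.

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Apply each (weights, bias) layer followed by a nonlinearity."""
    for W, b in layers:
        x = relu(W @ x + b)
    return x

rng = np.random.default_rng(0)
sizes = [8, 16, 16, 4]  # input, two hidden layers, output
layers = [(rng.normal(size=(m, n)), np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]
print(forward(rng.normal(size=8), layers))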


10.29007/ggcf ◽  
2020 ◽  
Author(s):  
Selmer Bringsjord ◽  
Naveen Sundar Govindarajulu ◽  
John Licato ◽  
Michael Giancola

This paper introduces, philosophically and to a degree formally, the novel concept of learning ex nihilo, intended (obviously) to be analogous to the concept of creation ex nihilo. Learning ex nihilo is an agent's learning "from nothing," by the suitable employment of inference schemata for deductive and inductive reasoning. This reasoning must be in machine-verifiable accord with a formal proof/argument theory in a cognitive calculus (i.e., here, roughly, an intensional higher-order multi-operator quantified logic), and this reasoning is applied to percepts received by the agent, in the context of both some prior knowledge and some prior and current interests. Learning ex nihilo is a challenge to contemporary forms of ML, indeed a severe one, but the challenge is offered here in the spirit of seeking to stimulate attempts, on the part of non-logicist ML researchers and engineers, to collaborate with those in possession of learning-ex-nihilo frameworks, and eventually attempts to integrate directly with such frameworks at the implementation level. Such integration will require, among other things, the symbiotic interoperation of state-of-the-art automated reasoners and high-expressivity planners with statistical/connectionist ML technology.
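
A toy sketch of the deductive half of this idea is given below: forward-chaining modus ponens over percepts and prior knowledge, so that new beliefs arise by inference alone. This is propositional only, whereas the paper requires a far richer intensional higher-order cognitive calculus; the facts and rules are invented for illustration.

def forward_chain(facts, rules):
    """Saturate: repeatedly apply rules (premises -> conclusion) to known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

percepts = {"wet_streets"}                  # what the agent currently perceives
prior_knowledge = {"night_fell"}            # prior knowledge
rules = [({"wet_streets"}, "it_rained"),    # hypothetical inference schemata
         ({"it_rained", "night_fell"}, "roads_slippery")]

print(forward_chain(percepts | prior_knowledge, rules))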

