Design of a Clutchless Hybrid Transmission for a High-Performance Vehicle

Author(s):  
Chad L. Jacoby ◽  
Young Suk Jo ◽  
Jake Jurewicz ◽  
Guillermo Pamanes ◽  
Joshua E. Siegel ◽  
...  

There exists the potential for major simplifications to current hybrid transmission architectures, which can lead to advances in powertrain performance. This paper assesses the technical merits of various hybrid powertrains in the context of high-performance vehicles and introduces a new transmission concept targeted at high-performance hybrid applications. While many hybrid transmission configurations have been developed and implemented in mainstream and even luxury vehicles, ultra-high-performance sports cars have only recently begun to hybridize. The unique performance requirements of such vehicles place novel constraints on their transmission designs. The goals shift away from improved efficiency and smoothness and toward weight reduction, complexity reduction, and performance improvement. To identify the most critical aspects of a high-performance transmission, a wide range of existing technologies is studied in concert with basic physical performance analysis of electric motors and an internal combustion engine. The new transmission concepts presented here emphasize a reduction in inertial, frictional, and mechanical losses. A series of conceptual powertrain designs are evaluated against the goals of reducing mechanical complexity and maintaining functionality. The major innovation in these concepts is the elimination of a friction clutch to engage and disengage gears. Instead, the design proposes that the inclusion of a large electric motor enables the gears to be speed-matched and torque-zeroed without the inherent losses associated with a friction clutch. Additionally, these transmission concepts explore the merits of multiple electric motors and their placement, as well as a reduction in synchronization interfaces. Ultimately, two strategies for speed-matched gear sets are considered, and a speed-matching prototype of the chosen methodology is presented to validate the feasibility of the proposed concept. The power flow and operational modes of both transmission architectures are studied to ensure required functionality and identify further areas of optimization. While there are still many unanswered questions about this concept, this paper introduces the base analysis and proof of concept for a technology that has great potential to advance hybrid vehicles at all levels.
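
As a rough, hedged illustration of the speed-matching idea described in this abstract (not the authors' actual control law), the Python sketch below computes the motor-side speed target for an incoming gear and a simple engage/no-engage check based on the speed-matched and torque-zeroed conditions; all gear ratios, tolerances, and numbers are hypothetical.

```python
# Minimal sketch of clutchless gear engagement by speed matching.
# All ratios, tolerances, and numbers are illustrative, not from the paper.

def speed_match_target_rpm(wheel_rpm: float, gear_ratio: float,
                           final_drive_ratio: float) -> float:
    """Shaft speed the electric motor must reach so the incoming gear
    meshes with (near) zero relative speed."""
    return wheel_rpm * final_drive_ratio * gear_ratio


def ready_to_engage(shaft_rpm: float, target_rpm: float,
                    shaft_torque_nm: float,
                    rpm_tol: float = 30.0, torque_tol_nm: float = 5.0) -> bool:
    """Engage only when the gear set is speed-matched and torque-zeroed,
    the two conditions the concept uses in place of a friction clutch."""
    return (abs(shaft_rpm - target_rpm) <= rpm_tol
            and abs(shaft_torque_nm) <= torque_tol_nm)


if __name__ == "__main__":
    # Hypothetical upshift into a 1.36:1 gear with a 3.7:1 final drive.
    target = speed_match_target_rpm(wheel_rpm=750.0, gear_ratio=1.36,
                                    final_drive_ratio=3.7)
    print(f"motor speed target: {target:.0f} rpm")
    print("engage:", ready_to_engage(3800.0, target, shaft_torque_nm=2.0))
```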

Author(s):  
Ruzimov Sanjarbek ◽  
Jamshid Mavlonov ◽  
Akmal Mukhitdinov

This paper presents an analysis of the component sizes of commercially available vehicles with electrified powertrains. It provides insight into how the powertrain components (the internal combustion engine, the electric motor, and the battery) of mass-production electrified vehicles are sized. Data from a wide range of mass-production electrified vehicles are collected and analyzed. First, the main vehicle performance requirements are described. The power values needed to meet these performance requirements are calculated and compared to real vehicle data. Based on the calculated power requirements, the minimum sizes of the powertrain components are derived. The paper highlights how the sizing methodologies described in the research literature are applied in sizing the powertrains of commercially available electrified vehicles.
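
As a back-of-the-envelope companion to the sizing discussion above, the following Python sketch evaluates the standard road-load power equation for two illustrative requirements (steady top speed and acceleration at speed); the vehicle parameters are invented and are not taken from the paper's dataset.

```python
import math

# Sketch of the kind of minimum-power calculation compared against
# production data; all vehicle parameters below are illustrative.

RHO_AIR = 1.2   # air density, kg/m^3
G = 9.81        # gravitational acceleration, m/s^2

def road_load_power(mass_kg, v_mps, c_rr=0.01, cd=0.30, frontal_area_m2=2.2,
                    grade=0.0, accel_mps2=0.0):
    """Tractive power (W) needed at speed v to overcome rolling resistance,
    aerodynamic drag, grade, and to provide the requested acceleration."""
    f_roll = c_rr * mass_kg * G * math.cos(math.atan(grade))
    f_aero = 0.5 * RHO_AIR * cd * frontal_area_m2 * v_mps ** 2
    f_grade = mass_kg * G * math.sin(math.atan(grade))
    f_accel = mass_kg * accel_mps2
    return (f_roll + f_aero + f_grade + f_accel) * v_mps

if __name__ == "__main__":
    # Top-speed requirement: steady 180 km/h on a flat road.
    p_top = road_load_power(1600.0, 180 / 3.6)
    # Acceleration requirement: ~3 m/s^2 while passing at 100 km/h.
    p_acc = road_load_power(1600.0, 100 / 3.6, accel_mps2=3.0)
    print(f"top-speed power    ~ {p_top / 1e3:.0f} kW")
    print(f"acceleration power ~ {p_acc / 1e3:.0f} kW")
```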


Author(s):  
Cameron L. Mock ◽  
Zachary T. Hamilton ◽  
Dustin Carruthers ◽  
John F. O’Brien

Measures that trade control performance for greater robustness (e.g., reduced bandwidth, shallow loop roll-off) must be strengthened if the plant or actuators are known to have nonlinear characteristics that cause variations in loop transmission. Common causes of these nonlinear behaviors are actuator saturation and friction/stiction in the moving parts of mechanical systems. Systems with these characteristics that also have stringent closed-loop performance requirements present the control designer with an extremely challenging problem. A design method for these systems is presented that combines very aggressive Nyquist-stable linear control, to provide large negative feedback, with nonlinear feedback that compensates for the effects of multiple nonlinearities in the loop that threaten stability and performance. The efficacy of this approach is experimentally verified on a parallel kinematic mechanism with multiple uncertain nonlinearities used for vibration suppression.
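
The sketch below is a generic toy example, not the paper's Nyquist-stable multi-nonlinearity design: it simulates an aggressive PI loop on a first-order plant with actuator saturation and shows how a simple nonlinear compensation (back-calculation anti-windup) limits the overshoot that saturation-induced windup would otherwise cause. All gains and the plant model are invented.

```python
# Toy illustration: aggressive PI control of a first-order plant with
# actuator saturation, with and without back-calculation anti-windup.

def peak_output(anti_windup: bool, t_end: float = 5.0, dt: float = 1e-3) -> float:
    kp, ki, kaw = 20.0, 60.0, 10.0    # proportional, integral, tracking gains
    u_max = 1.0                        # actuator saturation limit
    r = 0.5                            # step reference
    x, integ, peak = 0.0, 0.0, 0.0     # plant state, integrator state, peak output
    for _ in range(int(t_end / dt)):
        e = r - x
        u_unsat = kp * e + ki * integ
        u = max(-u_max, min(u_max, u_unsat))        # saturation nonlinearity
        aw = kaw * (u - u_unsat) if anti_windup else 0.0
        integ += (e + aw) * dt                      # back-calculation bleeds off windup
        x += (-x + u) * dt                          # plant: dx/dt = -x + u
        peak = max(peak, x)
    return peak

if __name__ == "__main__":
    print("peak output, no anti-windup:  ", round(peak_output(False), 3))
    print("peak output, with anti-windup:", round(peak_output(True), 3))
```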


PeerJ ◽  
2020 ◽  
Vol 8 ◽  
pp. e9414 ◽  
Author(s):  
David Bridges ◽  
Alain Pitiot ◽  
Michael R. MacAskill ◽  
Jonathan W. Peirce

Many researchers in the behavioral sciences depend on research software that presents stimuli, and records response times, with sub-millisecond precision. There are a large number of software packages with which to conduct these behavioral experiments and measure response times and performance of participants. Very little information is available, however, on what timing performance they achieve in practice. Here we report a wide-ranging study looking at the precision and accuracy of visual and auditory stimulus timing and response times, measured with a Black Box Toolkit. We compared a range of popular packages: PsychoPy, E-Prime®, NBS Presentation®, Psychophysics Toolbox, OpenSesame, Expyriment, Gorilla, jsPsych, Lab.js and Testable. Where possible, the packages were tested on Windows, macOS, and Ubuntu, and in a range of browsers for the online studies, to try to identify common patterns in performance. Among the lab-based experiments, Psychtoolbox, PsychoPy, Presentation and E-Prime provided the best timing, all with mean precision under 1 millisecond across the visual, audio and response measures. OpenSesame had slightly less precision across the board, but most notably in audio stimuli and Expyriment had rather poor precision. Across operating systems, the pattern was that precision was generally very slightly better under Ubuntu than Windows, and that macOS was the worst, at least for visual stimuli, for all packages. Online studies did not deliver the same level of precision as lab-based systems, with slightly more variability in all measurements. That said, PsychoPy and Gorilla, broadly the best performers, were achieving very close to millisecond precision on several browser/operating system combinations. For response times (measured using a high-performance button box), most of the packages achieved precision at least under 10 ms in all browsers, with PsychoPy achieving a precision under 3.5 ms in all. There was considerable variability between OS/browser combinations, especially in audio-visual synchrony which is the least precise aspect of the browser-based experiments. Nonetheless, the data indicate that online methods can be suitable for a wide range of studies, with due thought about the sources of variability that result. The results, from over 110,000 trials, highlight the wide range of timing qualities that can occur even in these dedicated software packages for the task. We stress the importance of scientists making their own timing validation measurements for their own stimuli and computer configuration.
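
In the spirit of the authors' closing advice that scientists validate timing on their own hardware, here is a minimal Python sketch (standard library only, not one of the packages tested) that estimates how precisely a machine can schedule a nominal 60 Hz frame interval; it probes only CPU timer and sleep jitter, not display or audio latency, which require external hardware such as the Black Box Toolkit used in the study.

```python
import time
import statistics

# Minimal timing self-check: how precisely can this machine schedule a
# nominal 16.7 ms "frame"? This measures timer/sleep jitter only.

def frame_intervals_ms(n_frames: int = 300, frame_s: float = 1 / 60):
    intervals = []
    t_prev = time.perf_counter()
    for _ in range(n_frames):
        time.sleep(frame_s)
        t_now = time.perf_counter()
        intervals.append((t_now - t_prev) * 1000.0)  # milliseconds
        t_prev = t_now
    return intervals

if __name__ == "__main__":
    ms = frame_intervals_ms()
    print(f"mean interval:  {statistics.mean(ms):.3f} ms")
    print(f"sd (precision): {statistics.stdev(ms):.3f} ms")
    print(f"worst case:     {max(ms):.3f} ms")
```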


Author(s):  
Abdel-Hamid I. Mourad

In recent years, blending of different polymers has received increasing attention from researchers for various reasons, including the possibility of creating a material or product for new industrial applications with specific processing and performance requirements that cannot be satisfied by a single component. Polyethylene (PE), polypropylene (PP), and their blends have attracted much attention due to their potential industrial applications, such as piping systems in pressure vessels and pipelines. The main objective of this work is to study the effect of thermal treatment/aging and of the PE/PP blending ratio (composition range) on the mechanical behaviour (tensile and hardness) of PE, PP, and PE/PP blends. Samples of PE/PP blends containing 100/0, 75/25, 50/50, 25/75, and 0/100 weight percent were prepared via injection molding and thermally treated/aged at 100 °C for 0, 2, 4, 7, and 14 days. The tensile measurements indicated that the yield strength and the modulus decrease with increasing PE content. It was also observed that PE, PP, and their blends deform in a ductile mode: they undergo uniform yielding over a wide range of deformation, followed by strain hardening and then failure. The strain to break for pure PE is found to be much higher than that for pure PP, with intermediate values observed for their blends. The hardness measurements also revealed that increasing the PE content in PE/PP blends reduces the hardness relative to pure PP, whereas thermal aging did not affect the hardness, in good agreement with the tensile results.


2021 ◽  
Vol 20 (5s) ◽  
pp. 1-26
Author(s):  
Guihong Li ◽  
Sumit K. Mandal ◽  
Umit Y. Ogras ◽  
Radu Marculescu

Neural architecture search (NAS) is a promising technique to design efficient and high-performance deep neural networks (DNNs). As the performance requirements of ML applications grow continuously, hardware accelerators are playing an increasingly central role in DNN design. This trend makes NAS even more complicated and time-consuming for most real applications. This paper proposes FLASH, a very fast NAS methodology that co-optimizes the DNN accuracy and performance on a real hardware platform. As the main theoretical contribution, we first propose the NN-Degree, an analytical metric to quantify the topological characteristics of DNNs with skip connections (e.g., DenseNets, ResNets, Wide-ResNets, and MobileNets). The newly proposed NN-Degree allows us to do training-free NAS within one second and to build an accuracy predictor by training as few as 25 samples out of a vast search space with more than 63 billion configurations. Second, by performing inference on the target hardware, we fine-tune and validate our analytical models to estimate the latency, area, and energy consumption of various DNN architectures while executing standard ML datasets. Third, we construct a hierarchical algorithm based on simplicial homology global optimization (SHGO) to optimize the model-architecture co-design process, while considering the area, latency, and energy consumption of the target hardware. We demonstrate that, compared to the state-of-the-art NAS approaches, our proposed hierarchical SHGO-based algorithm enables more than four orders of magnitude speedup (specifically, the execution time of the proposed algorithm is about 0.1 seconds). Finally, our experimental evaluations show that FLASH is easily transferable to different hardware architectures, enabling us to do NAS on a Raspberry Pi-3B processor in less than 3 seconds.
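
The FLASH objective and hardware models are not spelled out in this abstract, so the sketch below only illustrates how a co-design cost could be handed to SciPy's simplicial homology global optimizer (scipy.optimize.shgo); the accuracy, latency, and energy surrogates over two architecture knobs are invented stand-ins, not the paper's calibrated models.

```python
import numpy as np
from scipy.optimize import shgo

# Toy stand-in for a hardware-aware NAS objective. FLASH uses the NN-Degree
# accuracy predictor and calibrated latency/area/energy models; here the
# three terms are invented smooth surrogates over two normalized
# architecture knobs (depth-like and width-like, both in [0, 1]).

def codesign_cost(x):
    depth, width = x
    accuracy = 1.0 - np.exp(-3.0 * (0.6 * depth + 0.4 * width))  # saturating gain
    latency = 0.5 * depth + 0.8 * width                          # grows with size
    energy = 0.3 * depth + 0.6 * width ** 2
    # Minimize a weighted cost: negative accuracy plus hardware penalties.
    return -accuracy + 0.5 * latency + 0.3 * energy

bounds = [(0.0, 1.0), (0.0, 1.0)]
result = shgo(codesign_cost, bounds, n=64, iters=3, sampling_method="sobol")

print("best knobs (depth, width):", np.round(result.x, 3))
print("best cost:", round(result.fun, 4))
```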


2020 ◽  
Author(s):  
David Bridges ◽  
Alain Pitiot ◽  
Michael R. MacAskill ◽  
Jonathan Westley Peirce

Many researchers in the behavioral sciences depend on research software that presents stimuli, and records response times, with sub-millisecond precision. There are a large number of software packages with which to conduct these behavioral experiments and measure response times and performance of participants. Very little information is available, however, on what timing performance they achieve in practice. Here we report a wide-ranging study looking at the precision and accuracy of visual and auditory stimulus timing and response times, measured with a Black Box Toolkit. We compared a range of popular packages: PsychoPy, E-Prime®, NBS Presentation®, Psychophysics Toolbox, OpenSesame, Expyriment, Gorilla, jsPsych, Lab.js and Testable. Where possible, the packages were tested on Windows, macOS, and Ubuntu, and in a range of browsers for the online studies, to try to identify common patterns in performance. Among the lab-based experiments, Psychtoolbox, PsychoPy, Presentation and E-Prime provided the best timing, all with mean precision under 1 millisecond across the visual, audio and response measures. OpenSesame had slightly less precision across the board, but most notably in audio stimuli, and Expyriment had rather poor precision. Across operating systems, the pattern was that precision was generally very slightly better under Ubuntu than Windows, and that macOS was the worst, at least for visual stimuli, for all packages. Online studies did not deliver the same level of precision as lab-based systems, with slightly more variability in all measurements. That said, PsychoPy and Gorilla, broadly the best performers, were achieving very close to millisecond precision on a number of browser configurations. For response times (using a high-performance button box), most of the packages achieved precision at least under 10 ms in all browsers, with PsychoPy achieving a precision under 3.5 ms in all. There was considerable variability between operating systems and browsers, especially in audio-visual synchrony, which is the least precise aspect of the browser-based experiments. Nonetheless, the data indicate that online methods can be suitable for a wide range of studies, with due thought about the sources of variability that result. The results, from over 110,000 trials, highlight the wide range of timing qualities that can occur even in these dedicated software packages for the task. We stress the importance of scientists making their own timing validation measurements for their own stimuli and computer configuration.


Author(s):  
David Pfander ◽  
Gregor Daiß ◽  
Dirk Pflüger

Clustering is an important task in data mining that has become more challenging due to the ever-increasing size of available datasets. To cope with these big data scenarios, a high-performance clustering approach is required. Sparse grid clustering is a density-based clustering method that uses a sparse grid density estimation as its central building block. The underlying density estimation approach enables the detection of clusters with non-convex shapes and without a predetermined number of clusters. In this work, we introduce a new distributed and performance-portable variant of the sparse grid clustering algorithm that is suited for big data settings. Our compute kernels were implemented in OpenCL to enable portability across a wide range of architectures. For distributed environments, we added a manager-worker scheme that was implemented using MPI. In experiments on two supercomputers, Piz Daint and Hazel Hen, with up to 100 million data points in a 10-dimensional dataset, we show the performance and scalability of our approach. The dataset with 100 million data points was clustered in 1198 s using 128 nodes of Piz Daint. This translates to an overall performance of 352 TFLOPS. On the node-level, we provide results for two GPUs, Nvidia's Tesla P100 and the AMD FirePro W8100, and one processor-based platform that uses Intel Xeon E5-2680v3 processors. In these experiments, we achieved between 43% and 66% of the peak performance across all compute kernels and devices, demonstrating the performance portability of our approach.
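
The production kernels described above are OpenCL, so the sketch below only illustrates the manager-worker distribution pattern, using mpi4py with a trivial stand-in for the per-chunk density-estimation work; chunk sizes, tags, and the fake workload are assumptions for illustration.

```python
# Manager-worker work distribution sketched with mpi4py. The "work" is a
# trivial stand-in for the OpenCL density-estimation kernels.
# Run with e.g.:  mpirun -n 4 python manager_worker.py
from mpi4py import MPI
import numpy as np

TAG_WORK, TAG_STOP = 1, 2

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:                                    # manager
    chunks = [np.random.rand(1000, 10) for _ in range(4 * (size - 1))]
    results, pending = [], 0
    # Prime every worker with one chunk, then hand out the rest on demand.
    for worker in range(1, size):
        comm.send(chunks.pop(), dest=worker, tag=TAG_WORK)
        pending += 1
    while pending:
        status = MPI.Status()
        partial = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
        results.append(partial)
        pending -= 1
        if chunks:
            comm.send(chunks.pop(), dest=status.Get_source(), tag=TAG_WORK)
            pending += 1
        else:
            comm.send(None, dest=status.Get_source(), tag=TAG_STOP)
    print("manager collected", len(results), "partial results")
else:                                            # worker
    while True:
        status = MPI.Status()
        chunk = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == TAG_STOP:
            break
        comm.send(float(chunk.sum()), dest=0, tag=TAG_WORK)  # stand-in "kernel"
```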


2020 ◽  
Vol 184 ◽  
pp. 01102
Author(s):  
P. Magudeaswaran ◽  
C. Vivek Kumar ◽  
Rathod Ravinder

High-Performance Concrete (HPC) is a high-quality concrete that must meet special conformity and performance requirements. The objective of this study was to investigate the possibility of adopting neural expert systems such as the Adaptive Neuro-Fuzzy Inference System (ANFIS) to develop a simulator and intelligent system for predicting the durability and strength of HPC composites. These soft computing methods emulate the decision-making ability of a human expert, benefiting both the construction industry and the research community. If properly utilized, they have the potential to increase speed, service life, efficiency, and consistency, minimize errors, and save time and cost that would otherwise be squandered using conventional approaches.


2019 ◽  
Vol 29 (07) ◽  
pp. 2050111
Author(s):  
Basma H. Mohamed ◽  
Ahmed Taha ◽  
Ahmed Shawky ◽  
Essraa Ahmed ◽  
Ali Mohamed ◽  
...  

With the new age of technology and the rise of the Internet of Things (IoT), there is a need to connect a wide range of devices with varying throughput and performance requirements. In this paper, a digital NarrowBand Internet of Things (NB-IoT) transmitter is proposed, targeting very-low-power, delay-insensitive IoT applications with low throughput requirements. NB-IoT is a new cellular technology introduced by 3GPP in Release 13 to provide wide-area coverage for the IoT. The low-cost receivers for such devices should have very low complexity and consume little power, so that they can run for several years. In this paper, the implementation of the data path chain of the digital uplink transmitter is presented. The standard specifications are studied carefully to determine the required design parameters for each block, and the design is synthesized in UMC 130-nm technology.
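
The abstract does not enumerate the individual blocks of the data path, so as one illustrative stage the Python sketch below implements the standard QPSK constellation mapping used by LTE/NB-IoT (3GPP TS 36.211); the surrounding blocks (scrambling, DFT precoding, SC-FDMA modulation) are only named, not implemented.

```python
import numpy as np

# One illustrative stage of an NB-IoT uplink data path: QPSK symbol mapping
# as defined in 3GPP TS 36.211 (Table 7.1.2-1), producing unit-power,
# Gray-coded symbols from pairs of bits.

def qpsk_map(bits: np.ndarray) -> np.ndarray:
    """Map an even-length 0/1 bit array to QPSK symbols."""
    assert bits.size % 2 == 0, "QPSK consumes bits in pairs"
    b_even = bits[0::2].astype(float)   # drives the in-phase component
    b_odd = bits[1::2].astype(float)    # drives the quadrature component
    return ((1 - 2 * b_even) + 1j * (1 - 2 * b_odd)) / np.sqrt(2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, size=16)
    symbols = qpsk_map(bits)
    print("bits:   ", bits)
    print("symbols:", np.round(symbols, 3))
    print("mean symbol power:", np.mean(np.abs(symbols) ** 2))
```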


Algorithms ◽  
2019 ◽  
Vol 12 (3) ◽  
pp. 60 ◽  
Author(s):  
David Pfander ◽  
Gregor Daiß ◽  
Dirk Pflüger

Clustering is an important task in data mining that has become more challenging due to the ever-increasing size of available datasets. To cope with these big data scenarios, a high-performance clustering approach is required. Sparse grid clustering is a density-based clustering method that uses a sparse grid density estimation as its central building block. The underlying density estimation approach enables the detection of clusters with non-convex shapes and without a predetermined number of clusters. In this work, we introduce a new distributed and performance-portable variant of the sparse grid clustering algorithm that is suited for big data settings. Our compute kernels were implemented in OpenCL to enable portability across a wide range of architectures. For distributed environments, we added a manager–worker scheme that was implemented using MPI. In experiments on two supercomputers, Piz Daint and Hazel Hen, with up to 100 million data points in a ten-dimensional dataset, we show the performance and scalability of our approach. The dataset with 100 million data points was clustered in 1198 s using 128 nodes of Piz Daint. This translates to an overall performance of 352 TFLOPS. On the node-level, we provide results for two GPUs, Nvidia's Tesla P100 and the AMD FirePro W8100, and one processor-based platform that uses Intel Xeon E5-2680v3 processors. In these experiments, we achieved between 43% and 66% of the peak performance across all compute kernels and devices, demonstrating the performance portability of our approach.

