usage efficiency
Recently Published Documents


TOTAL DOCUMENTS

124
(FIVE YEARS 41)

H-INDEX

10
(FIVE YEARS 1)

2021 ◽  
Vol 17 (13) ◽  
pp. 151-156
Author(s):  
P. Smirnov ◽  
B. Subbotin ◽  
V. Klimenko

The article discusses approaches to assessing the effectiveness of heavy-vehicle use, taking equipment for the transportation of bulk cargo as an example. A modernization of the existing methods for assessing the effectiveness of such equipment, together with the choice of specific operating conditions under uncertainty and existing environmental restrictions, is proposed; the methodological approach to this problem is based on the classical theory of linear programming. As a result, within the framework of the stated practical problem, the range of optimal solutions for the value of the load-carrying capacity utilization factor has been determined. This modernization of the existing methods for assessing vehicles by the efficiency of their operation can thus be recommended for use by the engineering and technical personnel of motor transport enterprises.
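As a rough illustration of the kind of constrained problem behind the utilization factor, the sketch below computes the factor over an uncertainty interval for cargo density, with carried mass limited by both rated payload and body volume. All numbers and names are invented for illustration, not taken from the article.

```python
# Hedged sketch: feasible range of the load-carrying capacity utilization
# factor under uncertain bulk-cargo density. Illustrative numbers only.

RATED_PAYLOAD_T = 20.0   # rated payload of the vehicle, tonnes (assumed)
BODY_VOLUME_M3 = 16.0    # usable body volume, m^3 (assumed)

def utilization_factor(density_t_per_m3):
    """Mass actually carried, limited by payload and volume, over rated payload."""
    mass = min(RATED_PAYLOAD_T, density_t_per_m3 * BODY_VOLUME_M3)
    return mass / RATED_PAYLOAD_T

# Uncertainty interval for bulk-cargo density (tonnes per m^3)
densities = [0.9, 1.1, 1.3, 1.5]
factors = [round(utilization_factor(d), 3) for d in densities]
print(factors)  # range of utilization factors over the density interval
```

For light cargo the volume constraint binds and the factor falls below one; for dense cargo the payload constraint binds and the factor saturates at one, which is the kind of constraint switching a linear-programming formulation captures.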


Author(s):  
Rajendra Kumar ◽  
Sunil Kumar Khatri ◽  
Mario José Diván

The rapid growth of IT infrastructure has increased the demand for data center space and power to host Information and Communication Technology (ICT) services. As a result, data centers consume more electrical power, and data center power and cooling management has become an important and challenging task. The aspects most directly affecting the power consumption of data centers are power losses and the commensurate cooling losses. It is difficult to optimize the Power Usage Efficiency (PUE) of a data center using conventional methods, which essentially require knowledge of each data center facility, its specific equipment, and how that equipment works. Hence, a novel optimization approach is needed to optimize power and cooling in the data center. This research work varies the temperature in the data center using a machine learning-based linear regression optimization technique. The ideal temperature is identified with high accuracy based on a prediction technique developed from the available data. With the proposed model, the PUE of the data center can be easily analyzed and predicted from the temperature changes maintained in the data center. As the temperature is raised from 19.73 °C to 21.17 °C, the cooling load decreases from 607 kW to 414 kW. The results show that maintaining the temperature at the optimum value significantly improves the data center PUE while keeping power consumption within permissible limits.
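A minimal sketch of the regression step described above: fitting a simple linear model of PUE against temperature setpoint and using it to predict PUE at a new setpoint. The data points are invented for illustration; the paper's actual data and model are not reproduced here.

```python
# Hedged sketch: linear regression of PUE on setpoint temperature,
# in the spirit of the prediction technique described in the abstract.
# All data points below are invented for illustration.

temps = [19.7, 20.1, 20.5, 20.9, 21.2]   # setpoint temperature, °C
pue   = [1.62, 1.58, 1.55, 1.51, 1.48]   # PUE observed at each setpoint

n = len(temps)
mean_t = sum(temps) / n
mean_p = sum(pue) / n
slope = sum((t - mean_t) * (p - mean_p) for t, p in zip(temps, pue)) \
        / sum((t - mean_t) ** 2 for t in temps)
intercept = mean_p - slope * mean_t

def predict_pue(temp_c):
    """Predicted PUE at a given temperature under the fitted linear model."""
    return slope * temp_c + intercept

print(round(predict_pue(21.0), 3))
```

The negative slope reflects the abstract's observation that raising the setpoint reduces cooling load, and hence PUE, within the permissible range.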


Inventions ◽  
2021 ◽  
Vol 6 (4) ◽  
pp. 67
Author(s):  
Hou Yip Cheng ◽  
Poh Kiat Ng ◽  
Robert Jeyakumar Nathan ◽  
Adi Saptari ◽  
Yu Jin Ng ◽  
...  

Foldable furniture is a trend in the modern furniture industry. However, apart from limitations attributed to multifunctionality and space-saving characteristics, complete design process documentation of foldable furniture is uncommon in furniture research. This study aims to develop a space-saving multipurpose table with improved ergonomic performance. Features and functions are extracted from research articles and patents for concept generation. The final concept is modelled using Autodesk Inventor Professional 2019. Mechanical simulations are performed to confirm the structural integrity of the invention before prototyping and testing. The tests accounted for usage efficiency, space and usability. Using Minitab 19, the experimental data are analysed with t-tests. The survey data are analysed using Spearman's correlation test via IBM SPSS Statistics version 21. Participants were able to complete tasks around 1.1–1.5 times faster with the proposed invention than with single-function furniture items. The space occupied by the proposed invention was approximately 25–80% less than that of the single-function furniture items placed together. The survey analysis demonstrated a strong, positive and significant correlation between space-saving effectiveness and ergonomic performance. Further development to transition this invention to its commercialisation phase should be pursued to facilitate the daily domestic activities of society at large.
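The survey analysis step can be sketched with a plain implementation of Spearman's rank correlation (the statistic SPSS computes for this test). The Likert-style scores below are invented; only the method matches the abstract.

```python
# Hedged sketch: Spearman's rank correlation between space-saving
# effectiveness and ergonomic performance scores. Scores are invented.

def rank(values):
    """Average ranks (1-based); tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Pearson correlation of the rank vectors (handles ties correctly)."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

space_saving = [4, 5, 3, 4, 5, 2, 4, 5]   # illustrative survey scores
ergonomics   = [4, 5, 3, 3, 4, 2, 4, 5]
print(round(spearman(space_saving, ergonomics), 3))
```

Ranking before correlating is what makes the test appropriate for ordinal survey data, where only the ordering of responses is meaningful.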


2021 ◽  
Author(s):  
Junpeng SHI ◽  
Kezhao LI ◽  
Lin CHAI ◽  
Lingfeng LIANG ◽  
Chengdong TIAN ◽  
...  

Abstract The usage efficiency of GNSS multisystem observation data can be greatly improved by applying rational satellite selection algorithms. Such algorithms can also improve the real-time reliability and accuracy of navigation. By combining the Sherman-Morrison formula and singular value decomposition (SVD), a method is proposed that reduces the geometric dilution of precision (GDOP) as the number of visible satellites increases. Moreover, by combining this smaller-GDOP method with the maximum volume of tetrahedron method, a new rapid satellite selection algorithm based on the Sherman-Morrison formula for GNSS multisystems is proposed. The basic idea of the algorithm is as follows: first, the maximum volume of tetrahedron method is used to obtain four initial reference satellites; then, further visible satellites are selected using the smaller-GDOP method to reduce the GDOP value and improve the accuracy of the overall algorithm. By setting a reasonable precision threshold, the satellite selection algorithm can run autonomously without intervention. The experimental results based on measured data indicate that (1) the GDOP values in most epochs over the whole period obtained with the satellite selection algorithm based on the Sherman-Morrison formula are less than 2. Furthermore, compared with the optimal estimation results of the GDOP for all visible satellites, the results of this algorithm can meet the requirements of high-precision navigation and positioning when the number of selected satellites reaches 13. Moreover, as the number of selected satellites continues to increase, the calculation time increases, but the decrease in the GDOP value is not obvious. (2) The algorithm includes an adaptive function based on the end indicator of the satellite selection calculation and the reasonable threshold. When the precision threshold is set to 0.01, the number of selected satellites in most epochs is less than 13. Furthermore, when the number of selected satellites reaches 13, the GDOP value is less than 2 with a probability of 93.54%. These findings verify that the proposed satellite selection algorithm based on the Sherman-Morrison formula provides autonomous functionality and high-accuracy results.
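The core trick the abstract relies on can be sketched directly: adding one satellite appends a row u to the geometry matrix G, so GᵀG gains the rank-one term uuᵀ, and the Sherman-Morrison formula updates (GᵀG)⁻¹ without re-inverting, letting candidate GDOP values be evaluated cheaply. The line-of-sight vectors below are invented; this is not the paper's algorithm, only the update it builds on.

```python
# Hedged sketch: Sherman-Morrison rank-one update of (G^T G)^-1 when one
# satellite is added, with GDOP = sqrt(trace of the inverse).
# Geometry rows are invented unit line-of-sight vectors plus a clock column.

def mat_inv(a):
    """Gauss-Jordan inverse of a small square matrix (lists of lists)."""
    n = len(a)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col and aug[r][col]:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def gram(rows):
    """G^T G for a geometry matrix given as a list of rows."""
    n = len(rows[0])
    return [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]

def gdop(inv):
    """GDOP from the inverse of G^T G."""
    return sum(inv[i][i] for i in range(len(inv))) ** 0.5

def sherman_morrison(inv, u):
    """(A + u u^T)^-1 computed from A^-1, without re-inverting A."""
    au = [sum(inv[i][k] * u[k] for k in range(len(u))) for i in range(len(u))]
    denom = 1.0 + sum(u[i] * au[i] for i in range(len(u)))
    return [[inv[i][j] - au[i] * au[j] / denom for j in range(len(u))]
            for i in range(len(u))]

# Four reference satellites (unit LOS direction + clock column of 1)
sats = [[0.26, 0.53, 0.81, 1.0], [-0.53, 0.26, 0.81, 1.0],
        [0.53, -0.26, 0.81, 1.0], [0.0, 0.0, 1.0, 1.0]]
inv4 = mat_inv(gram(sats))
candidate = [0.71, 0.0, 0.71, 1.0]          # candidate fifth satellite
inv5 = sherman_morrison(inv4, candidate)
print(round(gdop(inv4), 3), round(gdop(inv5), 3))  # GDOP shrinks as a satellite is added
```

Since each added satellite only increases GᵀG in the positive-semidefinite order, the trace of the inverse (and hence GDOP) can only decrease, which matches the abstract's observation of diminishing returns beyond about 13 satellites.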


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Arturo Téllez-Velázquez ◽  
Raúl Cruz-Barbosa

Given the high algorithmic complexity of Fast Fourier Transforms (FFT) applied to images, efficient use of computational resources has been a challenge in several engineering fields. Accelerator devices such as Graphics Processing Units are very attractive solutions that greatly improve processing times. However, when the number of images to be processed is large, having a limited amount of memory is a serious problem. This can be addressed by using more accelerators or higher-capacity accelerators, which implies higher costs. In hardware approaches, the separability property is frequently used to divide the two-dimensional FFT into several one-dimensional FFTs, which can be processed simultaneously by several computing units. A feasible alternative for addressing this problem is therefore distributed computing through an Apache Spark cluster. However, the feasibility of the separability property in distributed systems, when migrating from hardware implementations, is not evident. For this reason, this paper presents a comparative study of distributed versions of two-dimensional FFTs that use the separability property, to determine a suitable way to process large image sets using both the Spark RDD and DataFrame APIs.
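The separability property the paper exploits can be verified in a few lines: a 2-D DFT equals 1-D DFTs over all rows followed by 1-D DFTs over all columns, which is exactly what allows the work to be split into independent per-row and per-column tasks across partitions. The naive O(n²) DFT below is for clarity only, not a production FFT.

```python
# Hedged sketch: the 2-D DFT factors into row DFTs followed by column DFTs
# (separability), checked against a direct 2-D evaluation on a toy image.
import cmath

def dft1d(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def dft2d_separable(img):
    rows = [dft1d(r) for r in img]                  # stage 1: all rows
    cols = list(zip(*rows))                         # transpose
    col_dfts = [dft1d(list(c)) for c in cols]       # stage 2: all columns
    return [list(c) for c in zip(*col_dfts)]        # transpose back

def dft2d_direct(img):
    m, n = len(img), len(img[0])
    return [[sum(img[a][b] * cmath.exp(-2j * cmath.pi * (u * a / m + v * b / n))
                 for a in range(m) for b in range(n))
             for v in range(n)] for u in range(m)]

img = [[1, 2], [3, 4]]
sep = dft2d_separable(img)
ref = dft2d_direct(img)
print(all(abs(sep[i][j] - ref[i][j]) < 1e-9 for i in range(2) for j in range(2)))
```

In a Spark version, stage 1 maps `dft1d` over row records and stage 2 requires a transpose-like shuffle before mapping over columns; that shuffle is the communication cost the paper's comparison weighs against memory limits.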


2021 ◽  
Author(s):  
Zekui Lyu ◽  
Qingsong Xu

Abstract A microgripper acts as an end-effector in a microassembly system, completing pick-transport-release actions during the assembly process. Usually, the working space of the microassembly robot is small, and the operating environment is complicated. In addition, the assembled micro-objects are light, thin, brittle, and prone to damage. Thus, the microgripper should provide a compact construction, a suitable clamping range, and a safe clamping force for microassembly use. This paper presents the design, modeling, optimization, and simulation of a new piezoelectrically actuated compliant microgripper for microassembly applications. The designed gripper has the advantages of large motion displacement and high area usage efficiency. A three-stage amplification mechanism, based on a bridge-type mechanism and lever mechanisms arranged in series, is introduced to achieve a large jaw displacement. Optimization based on response surface analysis is applied to determine the structural parameters of the amplification mechanism. The displacement amplification ratio of the microgripper is analyzed via the pseudo-rigid-body model approach. Finite element analysis is conducted to evaluate and validate the performance of the gripper. The simulation results indicate that the gripper can achieve a maximum gripping displacement of 545.12 μm with an area usage efficiency of 370.32 nm/mm2, which is better than available designs in the literature.
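For stages arranged in series, the ideal overall amplification is the product of the stage ratios, and the reported area usage efficiency is output displacement per unit device footprint. The sketch below shows this arithmetic with invented stage ratios and dimensions; the paper's actual parameters come from its response-surface optimization and are not reproduced here.

```python
# Hedged sketch: ideal series amplification and area usage efficiency.
# All stage ratios, strokes, and footprint values are illustrative.

def series_amplification(ratios):
    """Ideal overall ratio of amplification stages connected in series."""
    total = 1.0
    for r in ratios:
        total *= r
    return total

stage_ratios = [8.0, 2.5, 1.8]    # bridge, lever 1, lever 2 (assumed values)
piezo_stroke_um = 15.0            # actuator stroke, micrometres (assumed)
jaw_displacement = series_amplification(stage_ratios) * piezo_stroke_um

footprint_mm2 = 1500.0            # device footprint, mm^2 (assumed)
# area usage efficiency in nm/mm^2, the unit used in the abstract
efficiency = jaw_displacement * 1000 / footprint_mm2
print(round(jaw_displacement, 1), round(efficiency, 2))
```

Real compliant mechanisms fall short of the ideal product because of joint compliance and load effects, which is why the paper validates the pseudo-rigid-body estimate with finite element analysis.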


2021 ◽  
Author(s):  
Sara Beier ◽  
Johannes Werner ◽  
Thierry Bouvier ◽  
Nicolas Mouquet ◽  
Cyrille Violle

We report genomic traits that have been associated with the life history of prokaryotes and highlight conflicting findings concerning earlier observed trait correlations and tradeoffs. To address possible explanations for these contradictions, we examined trait-trait variations of 11 genomic traits from ~17,000 sequenced genomes. The studied trait-trait variations suggested (i) the predominance of two orthogonal axes related to resistance and resilience, and (ii) an overlap between the resilience axis and an axis of resource usage efficiency. These findings imply that resistance-associated traits of prokaryotes are globally decoupled from traits associated with resilience and resource use efficiency. However, further inspection of pairwise scatterplots showed that resistance and resilience traits tended to be positively related for genomes up to roughly five million base pairs and negatively related for larger genomes. This in turn precludes a globally consistent assignment of prokaryote genomic traits to the competitor-stress-tolerator-ruderal (CSR) scheme, which sorts species into three ecological strategies depending on their location along disturbance and productivity gradients, and may serve as an explanation for conflicting findings from earlier studies. All reviewed genomic traits featured significant phylogenetic signals, and we propose that our trait table can be applied to extrapolate genomic traits from taxonomic marker genes. This will enable empirical evaluation of the assembly of these genomic traits in prokaryotic communities from different habitats and under different productivity and disturbance scenarios, as predicted via the resistance-resilience framework formulated here.


Electronics ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 1106
Author(s):  
Vladimir L. Petrović ◽  
Dragomir M. El Mezeni ◽  
Andreja Radošević

Quasi-cyclic low-density parity-check (QC-LDPC) codes were introduced as the physical channel coding solution for data channels in 5G New Radio (5G NR). Depending on the use case scenario, the standard proposes a wide variety of codes, which imposes the need for high encoder flexibility. LDPC codes from 5G NR have a convenient structure and can be encoded efficiently using forward substitution, without computationally intensive multiplications with dense matrices. However, state-of-the-art solutions for encoder hardware implementation can be inefficient, since many hardware processing units stay idle during the encoding process. This paper proposes a novel partially parallel architecture that provides high hardware usage efficiency (HUE) while achieving encoder flexibility and support for all 5G NR codes. The proposed architecture includes a flexible circular shifting network capable of shifting a single large bit vector or multiple smaller bit vectors, depending on the code. The encoder architecture was built around this shifter so that multiple parity check matrix elements can be processed in parallel for short codes, providing almost the same level of parallelism as for long codes. The processing schedule was optimized for minimal encoding time using a genetic algorithm. The optimized encoder provides high throughput, low latency, and the best HUE reported to date.
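The circular-shift operation at the heart of QC-LDPC encoding can be sketched briefly: each nonzero base-matrix entry denotes a cyclic shift of a length-Z block, so the encoder's main datapath is a flexible barrel shifter plus GF(2) accumulation. The lifting size Z, shift coefficients, and bit patterns below are invented for illustration and are not taken from a 5G NR base graph.

```python
# Hedged sketch: cyclic block shifts and GF(2) accumulation, the two
# operations QC-LDPC encoding by forward substitution is built from.
# Z, shifts, and bit blocks are illustrative, not 5G NR values.

def circ_shift(block, s):
    """Cyclically shift a length-Z bit block left by s positions."""
    s %= len(block)
    return block[s:] + block[:s]

def accumulate(acc, block):
    """GF(2) accumulation (bitwise XOR) used during forward substitution."""
    return [a ^ b for a, b in zip(acc, block)]

Z = 8
info_blocks = [[1, 0, 1, 1, 0, 0, 1, 0], [0, 1, 1, 0, 1, 0, 0, 1]]
shifts = [3, 5]   # shift coefficients from one base-matrix row (illustrative)

# One parity-check row: parity block = XOR of cyclically shifted info blocks
parity = [0] * Z
for blk, s in zip(info_blocks, shifts):
    parity = accumulate(parity, circ_shift(blk, s))
print(parity)
```

The architecture's flexibility requirement follows from this picture: short codes use small Z, so the same wide shifter must be reconfigurable to rotate several small blocks at once instead of one large one.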


Author(s):  
Anamika CR et al.

Nanofiber (NF) polymeric composites have attracted growing interest in recent years. The majority of studies focus on characterizing NFs and comparing them to traditional composites in terms of mechanical behavior and usage efficiency. There are different varieties of NFs, each with unique properties that influence whether they are suited to a particular industrial use. Because of the natural origin of these materials, they exhibit a wide variety of characteristics that depend largely on the harvesting location and conditions, making it difficult to choose the right fiber for a specific use. This study aims to map where each form of fiber stands across numerous properties by providing a detailed analysis of the characteristics of NFs employed as reinforcement in composite-based materials. Recent research on emerging forms of fibers is also discussed, along with a bibliometric analysis of NF composite applications. A future trend analysis of NF applications, as well as the innovations essential to extending their uses, is also addressed.

