time and energy
Recently Published Documents


TOTAL DOCUMENTS

1425
(FIVE YEARS 561)

H-INDEX

43
(FIVE YEARS 9)

2022 ◽  
Vol 15 (3) ◽  
pp. 1-32
Author(s):  
Nikolaos Alachiotis ◽  
Panagiotis Skrimponis ◽  
Manolis Pissadakis ◽  
Dionisios Pnevmatikatos

Disaggregated computer architectures eliminate resource fragmentation in next-generation datacenters by enabling virtual machines to employ resources such as CPUs, memory, and accelerators that are physically located on different servers. While this paves the way for highly compute- and/or memory-intensive applications to potentially deploy all CPU and/or memory resources in a datacenter, it poses a major challenge to the efficient deployment of hardware accelerators: input/output data can reside on different servers than the ones hosting accelerator resources, requiring time- and energy-consuming remote data transfers that diminish the gains of hardware acceleration. Targeting a disaggregated datacenter architecture similar to the IBM dReDBox prototype, the present work explores the potential of deploying custom acceleration units, implemented on FPGA technology, adjacent to the disaggregated-memory controller on memory bricks (in dReDBox terminology) to reduce data movement and improve performance and energy efficiency when reconstructing large phylogenies (evolutionary relationships among organisms). A fundamental computational kernel is the Phylogenetic Likelihood Function (PLF), which dominates the total execution time (up to 95%) of widely used maximum-likelihood methods. Numerous efforts to boost PLF performance over the years have focused on accelerating computation; since the PLF is a data-intensive, memory-bound operation, however, performance remains limited by data movement, and memory disaggregation only exacerbates the problem.
We describe two near-memory processing models: one addresses the distribution of the workload across memory bricks and is particularly tailored to larger genomes (e.g., plants and mammals); the other reduces overall memory requirements through memory-side data interpolation, transparently to the application, thereby allowing the phylogeny to scale to a larger number of organisms without requiring additional memory.
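The PLF's memory-bound character can be seen in a minimal sketch of the pruning (post-order) step that maximum-likelihood tools repeat at every inner tree node: two matrix products over large per-site arrays and an elementwise multiply, so data volume, not arithmetic, dominates. All names, shapes, and values below are illustrative, not the dReDBox or paper implementation.

```python
import numpy as np

def plf_combine(left_clv, right_clv, P_left, P_right):
    """Felsenstein-pruning step: combine two child conditional
    likelihood vectors (CLVs) at a parent node.

    left_clv, right_clv: (sites, states) arrays of per-site partials.
    P_left, P_right: (states, states) transition-probability matrices
    for the branches to each child.
    """
    # Each parent entry is a product of sums over child states:
    # two matrix products and an elementwise multiply per node.
    return (left_clv @ P_left.T) * (right_clv @ P_right.T)

# Toy example: 1,000 DNA sites, 4 states.
rng = np.random.default_rng(0)
sites, states = 1000, 4
L = rng.random((sites, states))
R = rng.random((sites, states))
P = np.full((states, states), 0.25)  # placeholder transition matrix
parent = plf_combine(L, R, P, P)
print(parent.shape)  # (1000, 4)
```

For realistic alignments the CLVs run to gigabytes, which is why moving the computation next to the memory, rather than the data to the accelerator, pays off.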


2022 ◽  
Author(s):  
Diana Sînziana Duca ◽  
Maria Doina Schipor

In this work we investigate the relationship between the perceived demands of the teaching profession and teachers' general sense of self-efficacy in on-site and online teaching contexts. We present the results of a study with N = 127 Romanian teachers aged 19 to 55 (mean age 39.26 years, SD = 9.20; 123 females, 4 males; 73 from urban areas, 54 from rural areas). Our results show that teachers' self-efficacy is lower in online professional activities than in on-site professional activities. In the online teaching environment, teachers with high self-efficacy scores tend to find it more challenging to deal with different levels of children's development and to work with children with learning disabilities, children with low attendance, children who do not follow instructions, and children who need more time and energy than other children. We discuss the implications of our results for policies and strategies to enhance the quality of teaching practices.


2022 ◽  
Vol 6 (1) ◽  
pp. 8
Author(s):  
Jhonny de Sá Rodrigues ◽  
Paulo Teixeira Gonçalves ◽  
Luis Pina ◽  
Fernando Gomes de Almeida

As the use of composite materials increases, the search for suitable automated processes gains relevance for guaranteeing production quality: ensuring the uniformity of the process, minimizing the amount of scrap generated, and reducing time and energy consumption. Traditional processes such as hand lay-up, vacuum bagging, and in-autoclave methods tend to lose efficiency as the size and shape complexity of the part being produced increase, motivating the search for alternative processes such as automated tape laying (ATL). This work describes the modelling and simulation of a composite ATL process with in situ consolidation, characterizing the machine elements and using the finite-difference method in conjunction with energy balances to create a digital twin of the process for subsequent control design. The implemented model is able to follow the process dynamics when changes are made to the heating element and to predict the composite material's temperature response, making it suitable for use as a digital twin of a production process using an ATL machine.
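The core numerical ingredient the abstract names, finite differences combined with energy balances, can be illustrated with a minimal 1-D explicit heat-conduction update of the kind such a thermal digital twin iterates. The material values, grid, and boundary conditions here are assumed placeholders, not the paper's model.

```python
import numpy as np

alpha = 1e-5         # thermal diffusivity (m^2/s), assumed value
dx, dt = 1e-3, 1e-2  # grid spacing (m) and time step (s)
r = alpha * dt / dx**2
assert r <= 0.5      # stability condition for the explicit scheme

T = np.full(50, 20.0)  # tape nodes initially at ambient 20 C
T[0] = 200.0           # heated end (e.g., a hot-gas torch or laser)

for _ in range(1000):
    # Interior energy balance: T_i += r * (T_{i+1} - 2*T_i + T_{i-1})
    T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[0], T[-1] = 200.0, 20.0  # fixed-temperature boundaries

print(round(T[1], 1))  # node next to the heater, warmed toward 200 C
```

A controller-oriented digital twin wraps an update like this so that changing the heater boundary value immediately propagates into the predicted temperature response.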


2022 ◽  
pp. 108886832110670
Author(s):  
Oliver Huxhold ◽  
Katherine L. Fiori ◽  
Tim Windsor

Empirical evidence about the development of social relationships across adulthood into late life continues to accumulate, but theoretical development has lagged behind. The Differential Investment of Resources (DIRe) model integrates these empirical advances. The model defines the investment of time and energy into social ties varying in terms of emotional closeness and kinship as the core mechanism explaining the formation and maintenance of social networks. Individual characteristics, acting as capacities, motivations, and skills, determine the amount, direction, and efficacy of the investment. The context (e.g., the living situation) affects the social opportunity structure, the amount of time and energy available, and individual characteristics. Finally, the model describes two feedback loops: (a) social capital affecting the individual’s living situation and (b) different types of ties impacting individual characteristics via social exchanges, social influences, and social evaluations. The proposed model will provide a theoretical basis for future research and hypothesis testing.


Author(s):  
Sanjay Kumar Roy ◽  
Kamal Kumar Sharma ◽  
Brahmadeo Prasad Singh

This article presents the RC-notch filter function using the floating admittance matrix (FAM) approach. The main advantages of the approach are easy implementation and efficient computation. The FAM method is general: the same procedure can be used for all types of electronic circuits, and it lends itself to partitioning of large networks. The property that the elements of any row or any column sum to zero provides an immediate consistency check, allowing the analyst to verify the very first equation before proceeding, which saves time and energy. The FAM method presented here is simple enough that anyone with basic knowledge of electronics and an understanding of matrix manipulation can analyze any circuit and derive all types of transfer functions. Mathematical modelling with the FAM method also allows designers to adjust their design comfortably at any stage of the analysis. These statements provide compelling reasons for adopting the proposed process and demonstrate its benefits.
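The zero-sum property the abstract relies on follows directly from how a floating (two-terminal) admittance is stamped into the matrix: +y on the two diagonal entries, -y on the two off-diagonal entries. A minimal numeric sketch, using a hypothetical 3-node RC network rather than the paper's notch filter, shows the check:

```python
import numpy as np

def stamp(Y, i, j, y):
    """Stamp a floating admittance y between nodes i and j."""
    Y[i, i] += y
    Y[j, j] += y
    Y[i, j] -= y
    Y[j, i] -= y

# Hypothetical 3-node RC network evaluated at s = j*w (numeric check).
w = 2 * np.pi * 1e3          # 1 kHz
R, C = 1e3, 100e-9           # assumed component values
Y = np.zeros((3, 3), dtype=complex)
stamp(Y, 0, 1, 1 / R)        # resistor between nodes 0 and 1
stamp(Y, 1, 2, 1j * w * C)   # capacitor between nodes 1 and 2

# The consistency check described in the abstract: every row and
# every column of a correctly assembled FAM sums to zero.
assert np.allclose(Y.sum(axis=0), 0) and np.allclose(Y.sum(axis=1), 0)
print("FAM zero-sum check passed")
```

If a stamp is mistyped, the corresponding row/column sum is nonzero and the error surfaces before any transfer function is derived.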


2022 ◽  
Vol 17 (1) ◽  
Author(s):  
Chien-Ping Wang ◽  
Burn Jeng Lin ◽  
Pin-Jiun Wu ◽  
Jiaw-Ren Shih ◽  
Yue-Der Chih ◽  
...  

Abstract An on-wafer micro-detector for in situ EUV (wavelength of 13.5 nm) detection featuring FinFET CMOS compatibility, a 1 T pixel, and battery-less sensing is demonstrated. Moreover, the detection results can be written to the in-pixel storage node and retained for days, enabling off-line and non-destructive reading. The high-spatial-resolution micro-detectors can be used to extract the actual parameters of the incident EUV on wafers, including light intensity, exposure time, and energy, which are key to optimizing lithographic processes in 5 nm FinFET technology and beyond.


2022 ◽  
Vol 2022 ◽  
pp. 1-12
Author(s):  
N. Arivazhagan ◽  
K. Somasundaram ◽  
D. Vijendra Babu ◽  
M. Gomathy Nayagam ◽  
R. M. Bommi ◽  
...  

Considering task dependencies, balanced scheduling in the Internet of Health Things (IoHT) is important for reducing makespan. In this paper, we develop a smart task-scheduling approach based on Hybrid Moth Flame Optimization (HMFO) for cloud computing integrated into the IoHT environment over e-healthcare systems. HMFO guarantees uniform resource assignment and enhanced quality of service (QoS). The model is trained on the Google cluster dataset so that it learns from instances of how jobs are scheduled in the cloud, and the trained HMFO model is then used to schedule jobs in real time. Simulations are conducted in a CloudSim environment to test the scheduling efficacy of the model in a hybrid cloud setting. The parameters used for performance assessment include resource use, response time, and energy utilization. In terms of response time, average run time, and cost, the hybrid HMFO approach offers an increased response rate with reduced cost and run time compared with other methods.
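Why an optimized schedule reduces makespan can be shown with a toy baseline comparison: random task placement versus a simple longest-processing-time (LPT) greedy assignment to the least-loaded VM. This is an illustrative stand-in, not the paper's HMFO algorithm; task counts and durations are made up.

```python
import random

def makespan(tasks, assignment, n_vms):
    """Makespan = finish time of the most heavily loaded VM."""
    loads = [0.0] * n_vms
    for t, vm in zip(tasks, assignment):
        loads[vm] += t
    return max(loads)

def lpt_makespan(tasks, n_vms):
    """Greedy LPT: longest tasks first, each to the least-loaded VM."""
    loads = [0.0] * n_vms
    for t in sorted(tasks, reverse=True):
        loads[loads.index(min(loads))] += t
    return max(loads)

random.seed(1)
tasks = [random.uniform(1, 10) for _ in range(100)]
n_vms = 8
rand_ms = makespan(tasks, [random.randrange(n_vms) for _ in tasks], n_vms)
lpt_ms = lpt_makespan(tasks, n_vms)
assert lpt_ms <= rand_ms  # the tuned schedule never does worse here
print(round(rand_ms, 1), round(lpt_ms, 1))
```

Metaheuristics such as HMFO search the same assignment space, but can additionally trade makespan against response time and energy rather than optimizing load alone.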


2022 ◽  
pp. 1619-1637
Author(s):  
Edward Anthony Delgado-Romero ◽  
Grace Ellen Mahoney ◽  
Nancy J. Muro-Rodriguez ◽  
Jhokania De Los Santos ◽  
Javier L. Romero-Heesacker

This chapter describes the issues involved in creating a bilingual and culturally competent psychology clinic in a university town in a southern US state known as one of the states most hostile to Latinx immigrants. Prior to the creation of the clinic, there were virtually no options for Spanish speakers seeking culturally or linguistically competent psychological services, and the number of bilingual/bicultural graduate students in psychology and the college of education was very low. The chapter is written from the perspective of the faculty founder of the clinic and the women who have served as clinic coordinators and sacrificed much time and energy, in addition to meeting their significant program requirements, so that the local Latinx immigrant community could have linguistically and culturally competent psychological services. The chapter blends the available research literature with the experiences of creating and running a clinic that supports many Latinx immigrant students and their families.


2022 ◽  
Vol 17 (01) ◽  
pp. C01004
Author(s):  
Jelena Mijuskovic

Abstract The electromagnetic calorimeter (ECAL) of the CMS detector has played an important role in the physics program of the experiment, delivering outstanding performance throughout data taking. The high-luminosity LHC will pose new challenges. The four- to five-fold increase in the number of interactions per bunch crossing will require superior time resolution and noise rejection capabilities. For these reasons the electronics readout has been completely redesigned. A dual-gain trans-impedance amplifier and an ASIC providing two 160 MHz ADC channels, gain selection, and data compression will be used in the new readout electronics. The trigger decision will be moved off-detector and will be performed by powerful and flexible FPGA processors, allowing more sophisticated trigger algorithms to be applied. The upgraded ECAL will be capable of high-precision energy measurements throughout HL-LHC and will greatly improve the time resolution for photons and electrons above 10 GeV.


Author(s):  
Komal ◽  
Gaurav Goel ◽  
Milanpreet Kaur

As a platform for offering on-demand services, cloud computing has grown in relevance and appeal. Its services follow a pay-per-use model. A cloud service provider's primary goal is to use resources efficiently, reducing execution time, cost, and other factors while increasing profit. Effective scheduling algorithms therefore remain a key issue in cloud computing, and the problem is NP-complete. Researchers have previously proposed several optimization techniques to address it, but more work is needed in this area. This paper presents a strategy for effective task scheduling based on a hybrid heuristic approach for both regular and larger workloads. The previous method handles jobs adequately, but its performance degrades as task sizes grow. The proposed scheduling method employs two distinct techniques to select a suitable VM for a given job. First, it enhances the LJFP method by employing OSIG, an upgraded version of the Genetic Algorithm, to choose solutions with improved fitness factors, crossover, and mutation operators. This selection returns the best machines, and PSO then chooses one for a specific job. The appropriate machine is chosen based on several factors, including expected execution time, current load, and energy usage. The proposed algorithm's performance is assessed in two distinct cloud scenarios with various VMs and tasks, and overall execution time and energy usage are calculated. The proposed algorithm outperforms existing techniques in terms of energy usage and average execution time in both scenarios.
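The final selection step, picking a VM by weighing expected execution time, current load, and energy usage, can be sketched as a weighted-cost choice. The VM names, metric values, and weights below are invented for illustration; the abstract's actual OSIG/PSO pipeline that produces the candidate set is not shown.

```python
# Hypothetical candidate VMs with the three factors the paper names.
vms = [
    {"name": "vm-a", "exec_time": 4.0, "load": 0.7, "energy": 2.0},
    {"name": "vm-b", "exec_time": 5.5, "load": 0.2, "energy": 1.5},
    {"name": "vm-c", "exec_time": 3.5, "load": 1.0, "energy": 3.5},
]
w_time, w_load, w_energy = 0.5, 0.3, 0.2  # assumed weights

def cost(vm):
    """Lower is better: a weighted sum of the three selection factors."""
    return (w_time * vm["exec_time"]
            + w_load * vm["load"]
            + w_energy * vm["energy"])

best = min(vms, key=cost)
print(best["name"])  # vm-a: lowest weighted cost of the three
```

In a metaheuristic such as PSO, a function like `cost` serves as the fitness evaluated for each candidate placement, with the weights tuned to the provider's priorities.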

