Hardware–Software Co-Design for Decimal Multiplication

Computers ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 17
Author(s):  
Riaz-ul-haque Mian ◽  
Michihiro Shintani ◽  
Michiko Inoue

Decimal arithmetic implemented in software is slow for very large-scale applications; when hardware is employed instead, extra area overhead is required. A balanced strategy can overcome both issues. Our proposed methods combine software with area-efficient decimal hardware components to implement the multiplication process, and they comply with the IEEE 754-2008 standard for decimal floating-point arithmetic. Analysis in a RISC-V-based integrated co-design evaluation framework reveals that the proposed methods provide several Pareto points for decimal multiplication solutions. The total execution process is sped up by 1.43× to 2.37× compared with a full software solution, and 7–97% less hardware is required compared with an area-efficient full hardware solution.
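As a point of reference for the software side of such a co-design, a minimal sketch of IEEE 754-2008-style decimal multiplication using Python's decimal module (the operand values and the decimal64 context below are illustrative assumptions; the paper's hardware components and RISC-V framework are not reproduced here):

```python
from decimal import Decimal, Context, ROUND_HALF_EVEN

# IEEE 754-2008 decimal64 parameters: 16 significant digits,
# exponent range [-383, 384], round-half-even.
decimal64 = Context(prec=16, rounding=ROUND_HALF_EVEN, Emin=-383, Emax=384)

price = Decimal("1234567.89")   # illustrative operands
rate = Decimal("0.0725")

print(decimal64.multiply(price, rate))  # Decimal('89506.172025'), exact in decimal
print(1234567.89 * 0.0725)              # binary float; may carry conversion/rounding error
```

The exactness of the decimal result (no decimal-to-binary conversion error) is precisely what software emulation buys at the cost of speed, motivating the hardware-assisted approach.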

Author(s):  
Mário Pereira Vestias

IEEE 754-2008 extended the standard with decimal floating-point arithmetic. Human-centric applications, such as financial and commercial ones, depend on decimal arithmetic, since their results must match exactly those obtained by human calculations without being subject to errors caused by decimal-to-binary conversions. Decimal multiplication is a fundamental operation utilized in many algorithms and is referred to in the IEEE 754-2008 standard. It has an inherent difficulty associated with representing decimal numbers in a binary number system: both bit and digit carries, as well as invalid results, must be considered in order to produce the correct result. This article focuses on algorithms for hardware implementation of decimal multiplication. Both decimal fixed-point and floating-point multiplication are described, including iterative and parallel solutions.
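As a baseline for the iterative solutions the article surveys, a minimal sketch of schoolbook fixed-point decimal multiplication on base-10 digit arrays; this is the generic textbook formulation (the digit lists and the 89 × 76 check are illustrative), not any specific hardware design from the article, but it makes visible the per-digit decimal carries that hardware must propagate:

```python
def decimal_multiply(a_digits, b_digits):
    """Iterative (schoolbook) fixed-point decimal multiplication.

    Operands are little-endian lists of base-10 digits; the decimal
    carry propagation here is what digit-serial hardware (e.g., on
    BCD-coded digits) must implement per digit.
    """
    result = [0] * (len(a_digits) + len(b_digits))
    for i, a in enumerate(a_digits):
        carry = 0
        for j, b in enumerate(b_digits):
            total = result[i + j] + a * b + carry
            result[i + j] = total % 10   # keep one decimal digit
            carry = total // 10          # decimal carry to the next digit
        result[i + len(b_digits)] += carry
    return result

# 89 x 76 = 6764, digits in little-endian order
assert decimal_multiply([9, 8], [6, 7]) == [4, 6, 7, 6]
```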


Author(s):  
Mário Pereira Vestias

IEEE 754-2008 extended the standard with decimal floating-point arithmetic. Human-centric applications, such as financial and commercial ones, depend on decimal arithmetic, since their results must match exactly those obtained by human calculations without being subject to errors caused by decimal-to-binary conversions. Decimal multiplication is a fundamental operation utilized in many algorithms and is referred to in the IEEE 754-2008 standard. It has an inherent difficulty associated with representing decimal numbers in a binary number system: both bit and digit carries, as well as invalid results, must be considered in order to produce the correct result. This chapter focuses on algorithms for hardware implementation of decimal multiplication. Both decimal fixed-point and floating-point multiplication are described, including iterative and parallel solutions.


2016 ◽  
Vol 12 (5) ◽  
pp. e513-e526 ◽  
Author(s):  
Madeline Li ◽  
Alyssa Macedo ◽  
Sean Crawford ◽  
Sabira Bagha ◽  
Yvonne W. Leung ◽  
...  

Purpose: Systematic screening for distress in oncology clinics has gained increasing acceptance as a means to improve cancer care, but its implementation poses enormous challenges. We describe the development and implementation of the Distress Assessment and Response Tool (DART) program in a large urban comprehensive cancer center. Method: DART is an electronic screening tool used to detect physical and emotional distress and practical concerns and is linked to triaged interprofessional collaborative care pathways. The implementation of DART depended on clinician education, technological innovation, transparent communication, and an evaluation framework based on principles of change management and quality improvement. Results: There have been 364,378 DART surveys completed since 2010, with a sustained screening rate of > 70% for the past 3 years. High staff satisfaction, increased perception of teamwork, greater clinical attention to the psychosocial needs of patients, improved patient-clinician communication, and patient satisfaction with care were demonstrated without a resultant increase in referrals to specialized psychosocial services. DART is now a standard of care for all patients attending the cancer center and a quality performance indicator for the organization. Conclusion: Key factors in the success of DART implementation were the adoption of a programmatic approach, strong institutional commitment, and a primary focus on clinic-based response. We have demonstrated that large-scale routine screening for distress in a cancer center is achievable and has the potential to enhance the cancer care experience for both patients and staff.


2020 ◽  
Vol 12 (4) ◽  
Author(s):  
Vesa Jormanainen ◽  
Jarmo Reponen

We report the large-scale deployment, implementation, and adoption of the nationwide centralized, integrated, and shared Kanta health information services using the Clinical Adoption Framework (CAF). The meso- and macro-level dimensions of the CAF were incorporated early into our e-health evaluation framework to assess Health Information System (HIS) implementation at the national level. We found strong support for the CAF macro-level model concepts in Finland. Typically, development programs were followed by government policy commitments, appropriate legislation, and state budget funding before the CAF meso-level implementation activities. Our quantitative data show that implementing large-scale health information technology (HIT) systems in practice is a rather long process; citizens' and professionals' acceptance in particular is essential to the success of HIT systems. When the implementation of the national health information systems was evaluated against the Clinical Adoption Meta-Model (CAMM), the results showed that Finland has already passed many milestones in the CAMM archetypes. According to our results, Finland appears to be a good laboratory in which to study the practical execution of HIT systems, and the CAF and CAMM theoretical constructs can be used for national-level HIS implementation evaluation.


2019 ◽  
Vol 28 (05) ◽  
pp. 1950019 ◽  
Author(s):  
Nicolás Torres ◽  
Marcelo Mendoza

Clustering-based recommender systems bound the search for similar users to small user clusters, providing fast recommendations in large-scale datasets. User groups can then naturally be distributed into different data partitions, scaling up the number of users the recommender system can handle. Unfortunately, as the number of users and items included in a cluster solution increases, the precision of a clustering-based recommender system decreases. We present a novel approach that introduces a cluster-based distance function for neighborhood computation. In our approach, clusters generated from the training data provide the basis for neighborhood selection. Then, to expand the search for relevant users, we use a novel measure that exploits the global cluster structure to infer distances to users outside the cluster. Empirical studies on five widely known benchmark datasets show that our proposal is very competitive in terms of precision, recall, and NDCG. However, the strongest point of our method is its scalability, reaching speedups of 20× in a sequential computing evaluation framework and up to 100× in a parallel architecture. These results show that an efficient implementation of our cluster-based CF method can handle very large datasets while providing good precision, avoiding the high computational costs involved in applying more sophisticated techniques.
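A minimal sketch of the general idea, assuming k-means clustering and centroid-to-centroid distance as the cluster-level proxy (the function names, toy data, and this specific proxy are illustrative assumptions, not the authors' exact measure):

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy user-item rating matrix: 1000 users, 50 items.
rng = np.random.default_rng(0)
ratings = rng.random((1000, 50))

km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(ratings)
labels, centroids = km.labels_, km.cluster_centers_

def cluster_distance(u, v):
    """Exact distance inside the query user's cluster; a cheap
    centroid-based proxy for users in other clusters, so the search
    can expand beyond one cluster without all-pairs computation."""
    if labels[u] == labels[v]:
        return np.linalg.norm(ratings[u] - ratings[v])
    return np.linalg.norm(centroids[labels[u]] - centroids[labels[v]])

def neighbors(u, n=10):
    """Top-n neighbors of user u under the cluster-based distance."""
    d = [(cluster_distance(u, v), v) for v in range(len(ratings)) if v != u]
    return [v for _, v in sorted(d)[:n]]

print(neighbors(0))
```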


2019 ◽  
Vol 11 (21) ◽  
pp. 5876
Author(s):  
Woongkyoo Bae ◽  
UnHyo Kim ◽  
Jeongwoo Lee

Since the 1970s, the South Korean government has been redeveloping blighted residential environments and adopting large-scale redevelopment policies to solve urban housing problems. However, it is difficult to designate areas for redevelopment and to identify areas where redevelopment is currently unfeasible. This study establishes a framework to support decision-making in the selection of housing renewal districts. The proposed Residential Environment Maintenance Index (REMI) overcomes the limitations of existing indicators, which are often biased toward physical requirements. Using it, we rationalize the designation of maintenance areas by considering both physical and social requirements and outline the renewal-district designation procedure. To derive REMI, we used an analytic hierarchy process (AHP) analysis and estimated the index's reliability by clarifying the relative importance and priority of the indicators based on surveys of 300 subject-matter experts. We analyzed various simulations by applying REMI at sites in Seoul where maintenance is currently planned or discharged. These reveal that the total number of urban renewal projects can be tuned to the economic situation by adjusting the number of renewal-district designations through REMI. The results have implications for understanding REMI's possible application and flexible management at the administrative level to pursue long-term sustainable development.
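For illustration, a minimal sketch of the AHP weighting step used in this kind of index construction, assuming a small pairwise-comparison matrix (the criteria and judgments below are invented, not the survey results from the 300 experts):

```python
import numpy as np

# Illustrative pairwise comparisons for three criteria,
# e.g., physical vs. social vs. economic requirements.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Priority weights are the normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, i].real)
weights /= weights.sum()

# Consistency check: CI from lambda_max, CR against Saaty's
# random index (RI = 0.58 for a 3x3 matrix).
n = A.shape[0]
CI = (eigvals.real[i] - n) / (n - 1)
CR = CI / 0.58

print(weights)  # relative importance of the criteria
print(CR)       # CR < 0.1 is conventionally acceptable
```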


2020 ◽  
Author(s):  
Paul Kim ◽  
Daniel Partridge ◽  
James Haywood

Global climate model (GCM) ensembles still produce a significant spread of estimates for the future of climate change, which hinders our ability to influence policymakers. The range of these estimates can only partly be explained by structural differences and varying choices of parameterisation schemes between GCMs. GCM representation of cloud and aerosol processes, more specifically of aerosol microphysical properties, remains a key source of uncertainty contributing to the wide spread of climate change estimates. The radiative effect of aerosol is directly linked to its microphysical properties, which are in turn controlled by aerosol source and sink processes during transport as well as by meteorological conditions.

A Lagrangian, trajectory-based GCM evaluation framework, using spatially and temporally collocated aerosol diagnostics, has been applied to over a dozen GCMs via the AeroCom initiative. This framework is designed to isolate the source and sink processes that occur during the aerosol life cycle in order to improve understanding of the impact of these processes on the simulated aerosol burden. Measurement station observations linked to reanalysis trajectories are then used to evaluate each GCM against a quasi-observational standard to assess GCM skill. The AeroCom trajectory experiment specifies strict guidelines for modelling groups: all simulations have wind fields nudged to ERA-Interim reanalysis, and all simulations use emissions from the same inventories. This ensures that discrepancies between GCM parameterisations are emphasised, while differences due to large-scale transport patterns, emissions, and other external factors are minimised.

Preliminary results from the AeroCom trajectory experiment will be presented and discussed, some of which are summarised here. A comparison of GCM aerosol particle number size distributions against observations made by measurement stations in different environments will be shown, highlighting the difficulties GCMs have in reproducing observed aerosol concentrations across all size ranges in pristine environments. The impact of precipitation during transport on aerosol microphysical properties in each GCM will be shown, and the implications for the resulting aerosol forcing estimates will be discussed. Results demonstrating the trajectory collocation framework will highlight its ability to give more accurate estimates of the key aerosol sources in GCMs and the importance of these sources in influencing modelled aerosol-cloud effects. In summary, it will be shown that this analysis approach enables us to better understand the drivers behind inter-model and model-observation discrepancies.
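The collocation step at the heart of such a framework can be pictured with a minimal sketch (the grid resolution, field shapes, and trajectory points below are illustrative assumptions, not the experiment's actual diagnostics): model output is sampled at the grid cell and output time nearest to each point of a reanalysis trajectory, so model aerosol diagnostics and observations refer to the same air-mass history.

```python
import numpy as np

def collocate(field, times, lats, lons, traj):
    """Sample field[t, lat, lon] at the grid point and time step
    nearest to each (time, lat, lon) trajectory point."""
    samples = []
    for t, lat, lon in traj:
        ti = np.argmin(np.abs(times - t))    # nearest model output time
        yi = np.argmin(np.abs(lats - lat))   # nearest grid latitude
        xi = np.argmin(np.abs(lons - lon))   # nearest grid longitude
        samples.append(field[ti, yi, xi])
    return np.array(samples)

# Toy 6-hourly field on a 2-degree global grid.
times = np.arange(0, 240, 6.0)                    # hours
lats = np.arange(-90, 91, 2.0)
lons = np.arange(0, 360, 2.0)
field = np.random.rand(times.size, lats.size, lons.size)

traj = [(12.0, 60.5, 24.9), (18.0, 61.2, 22.3)]   # (hour, lat, lon) points
print(collocate(field, times, lats, lons, traj))
```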


2007 ◽  
Vol 104 (30) ◽  
pp. 12259-12264 ◽  
Author(s):  
C. G. Knight ◽  
S. H. E. Knight ◽  
N. Massey ◽  
T. Aina ◽  
C. Christensen ◽  
...  
