application specific
Recently Published Documents


TOTAL DOCUMENTS: 2606 (FIVE YEARS: 364)

H-INDEX: 47 (FIVE YEARS: 7)

2022 ◽  
Vol 15 (2) ◽  
pp. 1-35
Author(s):  
Atakan Doğan ◽  
Kemal Ebcioğlu

Hardware-accelerated cloud computing systems based on FPGA chips (FPGA cloud) or ASIC chips (ASIC cloud) have emerged as a new technology trend for power-efficient acceleration of various software applications. However, the operating systems and hypervisors currently used in cloud computing will lead to power, performance, and scalability problems in an exascale cloud computing environment. Consequently, the present study proposes a parallel hardware hypervisor system that is implemented entirely in special-purpose hardware and that virtualizes application-specific multi-chip supercomputers, enabling virtual supercomputers to share the available FPGA and ASIC resources in a cloud system. In addition to the virtualization of multi-chip supercomputers, the system’s other unique features include the simultaneous migration of multiple communicating hardware tasks and the on-demand increase or decrease of the hardware resources allocated to a virtual supercomputer. By partitioning the flat hardware design of the proposed hypervisor system into multiple partitions and applying the chip unioning technique to these partitions, the present study also introduces a cloud building block chip that can be used to create FPGA or ASIC clouds. Single-chip and multi-chip verification studies have been performed to verify the functional correctness of the hypervisor system, which consumes only a small fraction (10%) of the hardware resources.
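As an illustration of the on-demand resource scaling described above, the following minimal Python sketch models a shared pool of hardware partitions that virtual supercomputers can acquire and release; the class, names, and allocation policy are illustrative assumptions, not the hypervisor's actual design.

```python
# Hypothetical model of on-demand resource scaling for virtual supercomputers
# sharing a pool of FPGA/ASIC partitions; names and policy are illustrative.

class CloudResourcePool:
    def __init__(self, total_partitions: int):
        self.free = set(range(total_partitions))  # unallocated hardware partitions
        self.allocations = {}                     # virtual supercomputer -> partitions

    def grow(self, vsc: str, count: int) -> bool:
        """Grant `count` additional partitions to a virtual supercomputer, if available."""
        if count > len(self.free):
            return False                          # not enough free hardware
        grant = {self.free.pop() for _ in range(count)}
        self.allocations.setdefault(vsc, set()).update(grant)
        return True

    def shrink(self, vsc: str, count: int) -> None:
        """Release `count` partitions back to the shared pool."""
        owned = self.allocations.get(vsc, set())
        for _ in range(min(count, len(owned))):
            self.free.add(owned.pop())

pool = CloudResourcePool(total_partitions=64)
pool.grow("vsc-A", 8)    # virtual supercomputer A acquires 8 partitions
pool.grow("vsc-B", 16)
pool.shrink("vsc-A", 4)  # A releases 4 partitions on demand
```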


2022 ◽  
Vol 13 (1) ◽  
Author(s):  
Senfeng Zeng ◽  
Chunsen Liu ◽  
Xiaohe Huang ◽  
Zhaowu Tang ◽  
Liwei Liu ◽  
...  

Abstract: With the rapid development of artificial intelligence, parallel image processing is becoming an increasingly important capability of computing hardware. To meet the requirements of various image processing tasks, the basic pixel processing unit contains multiple functional logic gates and a multiplexer, which leads to notable circuit redundancy. The pixel processing unit therefore leaves considerable room for optimization to address this area redundancy in parallel computing. Here, we demonstrate a pixel processing unit based on a single WSe2 transistor that provides multiple logic functions (AND and XNOR) that are electrically switchable. We further integrate these pixel processing units into a low transistor-consumption image processing array, in which both image intersection and image comparison tasks can be performed. For the same image processing capability, our image processing unit consumes less than 16% of the transistors of traditional circuits.
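To make the two switchable modes concrete, the following sketch models the array's behaviour in software: AND implements image intersection and XNOR implements image comparison over binary images. It is a functional illustration only, not a model of the WSe2 device.

```python
# Functional sketch (not a device model) of a pixel array whose per-pixel logic
# is switchable between AND (image intersection) and XNOR (image comparison).
import numpy as np

def pixel_array(img_a: np.ndarray, img_b: np.ndarray, mode: str) -> np.ndarray:
    """Apply the selected per-pixel logic function to two binary images."""
    a, b = img_a.astype(bool), img_b.astype(bool)
    if mode == "AND":        # intersection: 1 only where both images are 1
        return (a & b).astype(np.uint8)
    if mode == "XNOR":       # comparison: 1 where the two images agree
        return (~(a ^ b)).astype(np.uint8)
    raise ValueError("mode must be 'AND' or 'XNOR'")

img1 = np.array([[1, 0], [0, 1]], dtype=np.uint8)
img2 = np.array([[1, 1], [0, 1]], dtype=np.uint8)
print(pixel_array(img1, img2, "AND"))   # [[1 0] [0 1]]
print(pixel_array(img1, img2, "XNOR"))  # [[1 0] [1 1]]
```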


Author(s):  
Filippo Mele

Abstract: The increasing demand for performance improvements in radiation detectors, driven by cutting-edge research in nuclear physics, astrophysics and medical imaging, is causing not only a proliferation in the variety of radiation sensors, but also a growing need for tailored solutions for the front-end readout electronics. In this work, novel solutions for application-specific integrated circuits (ASICs) adopted in high-resolution X- and γ-ray spectroscopy applications are studied. In the first part of this work, an ultra-low-noise charge sensitive amplifier (CSA) is presented, with a specific focus on sub-microsecond filtering, addressing the growing interest in high-luminosity experiments. The CSA demonstrated excellent results with Silicon Drift Detectors (SDDs) and with room-temperature Cadmium Telluride (CdTe) detectors, recording state-of-the-art noise performance. The integration of the CSA within two full-custom radiation detection instruments realized for the ELETTRA (Trieste, Italy) and SESAME (Allan, Jordan) synchrotrons is also presented. In the second part of this work, an ASIC constellation designed for the X-Gamma Imaging Spectrometer (XGIS) on board the THESEUS space mission is described. The presented readout ASIC has a highly customized distributed architecture and integrates complete on-chip signal filtering, acquisition and digitization with ultra-low power consumption.


Author(s):  
Hadise Ramezani ◽  
Majid Mohammadi ◽  
Amir Sabbagh Molahosseini

Approximate computing is an alternative computing approach that can lead to high-performance implementations of audio and image processing as well as deep learning applications. However, most of the available approximate adders have been designed for application-specific integrated circuits (ASICs) and do not result in efficient implementations on field programmable gate arrays (FPGAs). In this paper, we design a new approximate adder customized for efficient implementation on FPGAs and use it to build a Gaussian filter. Experimental results for the Gaussian filter based on the proposed approximate adder, implemented on a Virtex-7 FPGA, indicate that resource utilization decreases by 20-51% and filter delay improves by 10-35% relative to the design based on the modified design methodology for building approximate adders for FPGA-based systems (MDeMAS) adder, for the obtained output quality.
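For readers unfamiliar with approximate addition, the sketch below shows one common class of approximate adder (a lower-part-OR adder) in Python; it is a generic illustration, not the MDeMAS adder or the adder proposed in this paper.

```python
# Generic lower-part-OR approximate adder: the k least-significant bits are
# approximated with a bitwise OR (no carry chain), the upper bits are added
# exactly. Illustrative only; not the MDeMAS adder or the proposed adder.

def approx_add(a: int, b: int, k: int = 4, width: int = 16) -> int:
    """Approximately add two `width`-bit unsigned integers."""
    mask = (1 << k) - 1
    low = (a & mask) | (b & mask)        # approximate lower k bits, no carries
    high = ((a >> k) + (b >> k)) << k    # exact addition of the upper bits
    return (high | low) & ((1 << width) - 1)

print(approx_add(7, 9), 7 + 9)   # 15 vs 16: small error confined to the low bits
```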


2022 ◽  
Vol 354 ◽  
pp. 00005
Author(s):  
Florin Ionel Burdea ◽  
Monica Crinela Burdea

Industrial explosive storage sites are considered areas of major risk to industrial, public and occupational safety and security, owing to the possibility of major accidents, the nature of the explosive substances, and the serious consequences of an explosion at these sites. The explosion risk assessment for explosives depots requires an analysis of all possible occurrences of the initiating events that could lead to a potential explosion, followed by an analysis of the security measures, all of which are quantified through the development of accident trees and accident sequences for each possible trigger. This paper presents the principles of designing a specialized computer application for explosion risk management at explosives depots for civilian use. This application provides the basis for preparing, under objective and site-specific conditions, the documents required for these types of technical infrastructure, starting from their design phase, and for quantifying the degree of damage both at the analyzed locations and in the surrounding areas.
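The accident-tree quantification mentioned above typically reduces to multiplying an initiating-event frequency by the probabilities along one branch of the tree; the sketch below shows this calculation with placeholder values that are not taken from the application.

```python
# Minimal sketch of accident-sequence quantification with an event tree:
# sequence frequency = initiating-event frequency x product of branch
# probabilities. All numbers are placeholders, not data from the application.
from math import prod

def sequence_frequency(initiating_freq: float, branch_probs: list) -> float:
    """Frequency (per year) of one accident sequence through the event tree."""
    return initiating_freq * prod(branch_probs)

f_init = 1e-2             # placeholder: initiating events per year
barriers = [0.05, 0.1]    # placeholder: detection fails, protective barrier fails
print(f"{sequence_frequency(f_init, barriers):.2e} explosions per year")  # 5.00e-05
```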


2021 ◽  
pp. 519-527
Author(s):  
M. H. Sargolzaei

Application-Specific Instruction-Set Processors (ASIPs) have established their processing power in embedded systems. Since energy efficiency is one of the most important challenges in this area, coarse-grained reconfigurable arrays (CGRAs) have been used in many different domains. The distinctive program execution model of CGRAs is the key to their energy efficiency, but it comes at some major costs. The context-switching network (CSN) is responsible for handling this unique program execution model and is also one of the most energy-hungry parts of a CGRA. In this paper, we propose a new method to predict important architectural parameters of the CSN of a CGRA, such as the size of the processing elements (PEs), the topology of the CSN, and the number of configuration registers in each PE. The proposed method is based on the high-level code of the input application, and it is used to prune the design space and increase the energy efficiency of the CGRA. Based on our results, not only is the design space of the CSN reduced to 10% of its original size, but performance and energy efficiency are also increased by about 13% and 73%, respectively. The architecture predicted by the proposed method is more than 97% similar to the best architecture found by exhaustively searching the design space.
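The design-space pruning step can be pictured as filtering candidate CSN configurations against requirements predicted from the application's high-level code; the toy sketch below illustrates this idea with hypothetical parameters and is not the prediction method proposed in the paper.

```python
# Toy illustration of design-space pruning for a CGRA context-switching network:
# enumerate candidate configurations and keep those meeting requirements
# predicted from the application. Parameters and the "prediction" itself are
# hypothetical, not the method proposed in the paper.
from itertools import product

pe_counts = [4, 8, 16]                     # candidate numbers of processing elements
topologies = ["ring", "mesh", "crossbar"]  # candidate CSN topologies
config_regs = [2, 4, 8]                    # candidate configuration registers per PE

# Assume an analysis of the high-level code predicted these minimum requirements.
predicted = {"min_pes": 8, "min_regs": 4, "topologies": {"mesh", "crossbar"}}

pruned = [
    (pes, topo, regs)
    for pes, topo, regs in product(pe_counts, topologies, config_regs)
    if pes >= predicted["min_pes"]
    and regs >= predicted["min_regs"]
    and topo in predicted["topologies"]
]
print(f"{len(pruned)} of {len(pe_counts) * len(topologies) * len(config_regs)} candidates remain")
```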


2021 ◽  
Vol 72 ◽  
pp. 1471-1505
Author(s):  
Rodothea Myrsini Tsoupidi ◽  
Roberto Castañeda Lozano ◽  
Benoit Baudry

The modern software deployment process produces software that is uniform and hence vulnerable to large-scale code-reuse attacks, such as Jump-Oriented Programming (JOP) attacks. Compiler-based diversification improves the resilience and security of software systems by automatically generating different assembly code versions of a given program. Existing techniques are efficient but do not provide precise control over the quality, such as the code size or speed, of the generated code variants. This paper introduces Diversity by Construction (DivCon), a constraint-based compiler approach to software diversification. Unlike previous approaches, DivCon allows users to control and adjust the conflicting goals of diversity and code quality. A key enabler is the use of Large Neighborhood Search (LNS) to generate highly diverse assembly code efficiently. For larger problems, we propose a combination of LNS with a structural decomposition of the problem. To further improve the diversification efficiency of DivCon against JOP attacks, we propose an application-specific distance measure tailored to the characteristics of JOP attacks. We evaluate DivCon with 20 functions from a popular benchmark suite for embedded systems. These experiments show that DivCon's combination of LNS and our application-specific distance measure generates binary programs that are highly resilient against JOP attacks (they share between 0.15% and 8% of JOP gadgets) with an optimality gap of 10%. Our results confirm that there is a trade-off between the quality of each assembly code version and the diversity of the entire pool of versions. In particular, the experiments show that DivCon is able to generate binary programs that share a very small number of gadgets while delivering near-optimal code. For constraint programming researchers and practitioners, this paper demonstrates that LNS is a valuable technique for finding diverse solutions. For security researchers and software engineers, DivCon extends the scope of compiler-based diversification to performance-critical and resource-constrained applications.
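One plausible form of such an application-specific distance is a set-based measure over the gadgets shared by two binary variants, e.g. one minus their Jaccard similarity; the sketch below illustrates this idea and is not necessarily the exact metric defined by DivCon.

```python
# One plausible application-specific distance for JOP resilience: 1 minus the
# Jaccard similarity of the gadget sets of two binary variants, so variants
# sharing fewer gadgets are farther apart. Illustrative; not necessarily the
# exact metric defined by DivCon.

def gadget_distance(gadgets_a: set, gadgets_b: set) -> float:
    """Distance in [0, 1]; 1.0 means the two variants share no gadgets."""
    if not gadgets_a and not gadgets_b:
        return 0.0
    return 1.0 - len(gadgets_a & gadgets_b) / len(gadgets_a | gadgets_b)

variant_1 = {"pop rdi; jmp rax", "mov rsi, rbx; jmp rcx", "add rsp, 8; jmp rdx"}
variant_2 = {"pop rdi; jmp rax", "xor rax, rax; jmp rbx"}
print(gadget_distance(variant_1, variant_2))  # 0.75: only one gadget is shared
```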


2021 ◽  
pp. 0148558X2110642
Author(s):  
Thomas W. Hall ◽  
Lucas A. Hoogduin ◽  
Bethane Jo Pierce ◽  
Jeffrey J. Tsay

Despite technological advances in accounting systems and audit techniques, sampling remains a commonly used audit tool. For critical estimation applications involving low error rate populations, stratified mean-per-unit sampling (SMPU) has the unique advantage of producing trustworthy confidence intervals. However, SMPU is less efficient than other classical sampling techniques because it requires a larger sample size to achieve comparable precision. To address this weakness, we investigated how SMPU efficiency can be improved via three key design choices: (a) stratum boundary selection method, (b) number of sampling strata, and (c) minimum stratum sample size. Our tests disclosed that SMPU efficiency varies significantly with stratum boundary selection method. An iterative search-based method yielded the best efficiency, followed by the Dalenius–Hodges and Equal-Value-Per-Stratum methods. We also found that variations in Dalenius–Hodges implementation procedures yielded meaningful differences in efficiency. Regardless of boundary selection method, increasing the number of sampling strata beyond levels recommended in the professional literature yielded significant improvements in SMPU efficiency. Although a minor factor, smaller values of minimum stratum sample size were found to yield better SMPU efficiency. Based on these findings, suggestions for improving SMPU efficiency are provided. We also present the first known equations for planning the number of sampling strata given various application-specific parameters.
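As a reference for one of the boundary-selection methods compared here, the sketch below implements the classical Dalenius–Hodges cumulative square-root-of-frequency rule; the bin count, number of strata, and population are illustrative.

```python
# Sketch of the Dalenius-Hodges cumulative square-root-of-frequency rule for
# choosing stratum boundaries. Bin count, strata, and population are illustrative.
import numpy as np

def dalenius_hodges_boundaries(values, n_strata, n_bins=50):
    """Return stratum boundary values using the cum-sqrt(f) rule."""
    freq, edges = np.histogram(values, bins=n_bins)
    cum_sqrt_f = np.cumsum(np.sqrt(freq))
    targets = cum_sqrt_f[-1] * np.arange(1, n_strata) / n_strata
    # Boundary = upper edge of the bin where cum-sqrt(f) first reaches each target.
    idx = np.searchsorted(cum_sqrt_f, targets)
    return edges[idx + 1]

rng = np.random.default_rng(0)
book_values = rng.lognormal(mean=8, sigma=1, size=5000)   # skewed monetary population
print(dalenius_hodges_boundaries(book_values, n_strata=4))
```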


Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 107
Author(s):  
Carlos Abellan Beteta ◽  
Dimitra Andreou ◽  
Marina Artuso ◽  
Andy Beiter ◽  
Steven Blusk ◽  
...  

SALT, a new dedicated readout Application Specific Integrated Circuit (ASIC) for the Upstream Tracker, a new silicon detector in the Large Hadron Collider beauty (LHCb) experiment, has been designed and developed. It is a 128-channel chip using an innovative architecture comprising a low-power analogue front-end with fast pulse shaping and a 40 MSps 6-bit Analog-to-Digital Converter (ADC) in each channel, followed by a Digital Signal Processing (DSP) block performing pedestal and Mean Common Mode (MCM) subtraction and zero suppression. The prototypes of SALT were fabricated and tested, confirming the full chip functionality and fulfilling the specifications. A signal-to-noise ratio of about 20 is achieved for a silicon sensor with a 12 pF input capacitance. In this paper, the SALT architecture and measurements of the chip performance are presented.
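The DSP chain described above can be summarised, at a purely functional level, as pedestal subtraction, mean common mode subtraction, and zero suppression; the sketch below models these steps in software with placeholder thresholds and data, not actual chip parameters.

```python
# Simplified software-level sketch of the DSP chain: per-channel pedestal
# subtraction, mean common mode (MCM) subtraction, zero suppression.
# Thresholds and data are placeholders, not actual chip parameters.
import numpy as np

def dsp_chain(adc, pedestals, threshold):
    """Process one event of 128 channel samples; return (channel, value) hits."""
    corrected = adc - pedestals                    # remove per-channel pedestal
    corrected = corrected - corrected.mean()       # subtract the mean common mode
    hits = np.flatnonzero(corrected > threshold)   # zero suppression
    return list(zip(hits.tolist(), np.round(corrected[hits], 1).tolist()))

rng = np.random.default_rng(1)
pedestals = rng.normal(512, 2, size=128)
event = pedestals + rng.normal(0, 1.5, size=128) + 5.0  # common-mode shift of +5 ADC counts
event[42] += 30                                         # one channel carrying real signal
print(dsp_chain(event, pedestals, threshold=6.0))       # expect only channel 42
```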


2021 ◽  
Author(s):  
Ian Moffat

The detection and mapping of unmarked graves is a significant focus of many archaeological and forensic investigations; however, traditional methods such as probing, forensic botany, cadaver dogs or dowsing are often ineffective, slow to cover large areas, or excessively invasive. Geophysics offers an appealing alternative suitable for the rapid, non-invasive investigation of large areas. Unfortunately, graves are a challenging target with no diagnostic geophysical response, so a rigorous, application-specific methodology is essential for a successful outcome. The most important elements of a successful survey methodology include ultra-high-density data, the use of multiple geophysical techniques to validate results based on several physical properties, excellent-quality positioning, and intensive site recording. Regardless of the methodology applied, geophysics should not be considered a panacea for locating all graves on all sites but should be used as an integral part of a comprehensive survey strategy.

