average throughput
Recently Published Documents

TOTAL DOCUMENTS: 81 (FIVE YEARS: 36)
H-INDEX: 6 (FIVE YEARS: 2)
2022 · Vol 15 (1) · pp. 1-21
Author(s): Chen Wu, Mingyu Wang, Xinyuan Chu, Kun Wang, Lei He

Low-precision data representation is important to reduce storage size and memory access for convolutional neural networks (CNNs). Yet, existing methods have two major limitations: (1) requiring re-training to maintain accuracy for deep CNNs, and (2) needing 16-bit floating-point or 8-bit fixed-point representations for good accuracy. In this article, we propose a low-precision (8-bit) floating-point (LPFP) quantization method for FPGA-based acceleration that overcomes the above limitations. Without any re-training, LPFP finds an optimal 8-bit data representation with negligible top-1/top-5 accuracy loss (within 0.5%/0.3% in our experiments, respectively, and significantly better than existing methods for deep CNNs). Furthermore, we implement one 8-bit LPFP multiplication with one 4-bit multiply-adder and one 3-bit adder, and can therefore implement four 8-bit LPFP multiplications using one DSP48E1 of the Xilinx Kintex-7 family or one DSP48E2 of the Xilinx UltraScale/UltraScale+ family, whereas one DSP can implement only two 8-bit fixed-point multiplications. Experiments on six typical CNNs for inference show that, on average, we improve throughput over existing FPGA accelerators. In particular, for VGG16 and YOLO, compared to six recent FPGA accelerators, we improve average throughput by 3.5× and 27.5× and average throughput per DSP by 4.1× and 5×, respectively.
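The abstract does not spell out how the optimal 8-bit exponent/mantissa split is chosen without re-training. The sketch below is only a rough illustration of that idea, assuming a hypothetical mean-squared-error search over pre-trained weights; the criterion, bias handling, and lack of subnormal/overflow treatment are assumptions, not the authors' actual method.

```python
import numpy as np

def quantize_lpfp(x, exp_bits, man_bits, bias):
    """Round x onto a sign/exponent/mantissa grid (illustrative; ignores subnormals)."""
    sign = np.sign(x)
    mag = np.abs(x) + 1e-30                               # avoid log2(0)
    e = np.floor(np.log2(mag))
    e = np.clip(e, -bias, (2**exp_bits - 1) - bias)        # representable exponent range
    step = 2.0 ** (e - man_bits)                           # mantissa grid spacing at exponent e
    return sign * np.round(mag / step) * step

def best_8bit_split(weights):
    """Search 1+e+m=8-bit splits and exponent biases minimizing quantization MSE."""
    best = None
    for exp_bits in range(2, 7):
        man_bits = 7 - exp_bits
        for bias in range(2**exp_bits):
            q = quantize_lpfp(weights, exp_bits, man_bits, bias)
            err = float(np.mean((weights - q) ** 2))
            if best is None or err < best[0]:
                best = (err, exp_bits, man_bits, bias)
    return best

w = (np.random.randn(1000) * 0.1).astype(np.float32)       # stand-in for pre-trained weights
print(best_8bit_split(w))                                   # (mse, exp_bits, man_bits, bias)
```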


2021 · Vol 67 (No. 4) · pp. 171-180
Author(s): Samuel Kojo Ahorsu, Hayford Ofori, Ernest Kumah, Maxwell Budu, Cephas Kwaku Bosrotsi, ...

The objective of this research was to design, construct and evaluate a variable-chipping-clearance cassava chipper for processors, capable of producing both uniform and varying cassava chip geometries for multipurpose usage. It consists of a drive shaft with varying chipping clearances (6, 18, and 28 mm) to produce varied chip geometry. The average throughput capacity of the chipper was found to be 475.5 kg·h⁻¹ over a speed range of 460–800 rpm with chipping clearances of 6–28 mm. The average chipping efficiency ranged from 76.6% to 99.4% across the selected operational speeds and chipping clearances. Both the chipping capacity and the output-to-input ratio depend on the operational speed and chipping clearance of the machine.
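For reference, throughput capacity and chipping efficiency of this kind are usually computed as mass processed per unit time and usable output as a fraction of input. The snippet below uses those common definitions with invented sample numbers; the paper's exact definitions and measurements are not given in the abstract.

```python
def throughput_capacity_kg_per_h(mass_chipped_kg: float, time_s: float) -> float:
    """Throughput capacity = mass processed per unit time, scaled to kg/h."""
    return mass_chipped_kg / time_s * 3600.0

def chipping_efficiency_pct(well_chipped_kg: float, total_input_kg: float) -> float:
    """Chipping efficiency = well-formed chip output as a percentage of input mass."""
    return well_chipped_kg / total_input_kg * 100.0

# Illustrative values only (not measurements from the paper).
print(throughput_capacity_kg_per_h(13.2, 100.0))   # 475.2 kg/h
print(chipping_efficiency_pct(9.5, 10.0))          # 95.0 %
```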


Author(s): Jingru Tan, Wenbo Guan

Aiming at the problem of high energy consumption in Fog Wireless Access Networks (F-RANs), this paper studies a resource allocation scheme for the F-RAN architecture assisted by renewable energy. First, the transmission model and the Energy Harvesting (EH) model are established: a solar energy harvester is installed on each Fog Access Point (F-AP), and each F-AP is also connected to the smart grid. Second, an optimization problem is formulated under constraints on the Signal-to-Noise Ratio (SNR), available bandwidth, and harvested energy, with the goal of maximizing the average throughput of the F-RAN architecture with hybrid energy sources. Finally, dynamic power allocation in the network is studied using Q-learning and a Deep Q-Network (DQN), respectively. Simulation results show that both proposed algorithms improve the average throughput of the whole network compared with traditional algorithms.
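The abstract does not give the state, action, or reward design used for Q-learning. The following is a purely illustrative tabular Q-learning power allocator for a single solar-powered F-AP; the battery model, discrete power levels, and throughput-style reward are assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# State: quantized battery level; action: energy units drawn from the battery per slot.
N_BATTERY = 10
POWERS = [0, 1, 2, 3]
BANDWIDTH, NOISE = 1.0, 1e-2

Q = np.zeros((N_BATTERY, len(POWERS)))
alpha, gamma, eps = 0.1, 0.9, 0.1

battery = N_BATTERY // 2
for _ in range(50_000):
    s = battery
    a = int(rng.integers(len(POWERS))) if rng.random() < eps else int(Q[s].argmax())
    draw = min(POWERS[a], battery)                          # cannot spend more than is stored
    gain = rng.exponential(1.0)                             # random channel gain this slot
    reward = BANDWIDTH * np.log2(1 + draw * gain / NOISE)   # throughput proxy
    harvested = int(rng.integers(0, 3))                     # quantized solar arrivals
    battery = int(np.clip(battery - draw + harvested, 0, N_BATTERY - 1))
    Q[s, a] += alpha * (reward + gamma * Q[battery].max() - Q[s, a])

print("learned power index per battery level:", Q.argmax(axis=1))
```

A DQN version would replace the Q table with a small neural network over the same state, which is what makes the approach scale beyond a handful of discretized battery levels.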


2021 · Author(s): Pantelis-Daniel Arapoglou, Giulio Colavolpe, Tommaso Foggi, Nicolò Mazzali, Armando Vannucci

In the frame of ongoing efforts between space agencies to define an on-off-keying-based optical low-Earth-orbit (LEO) direct-to-Earth (DTE) waveform, this paper offers an in-depth analysis of the Variable Data Rate (VDR) technique. VDR, in contrast to the currently adopted Constant Data Rate (CDR) approach, enables optimization of the average throughput during a LEO pass over the optical ground station (OGS). The analysis addresses critical link-level aspects, such as receiver (time, frame, and amplitude) synchronization, and demonstrates the benefits of employing VDR at system level: the average-throughput gain was found to be around 100% compared with a CDR transmission approach.
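The gain can be understood by comparing a data rate that tracks the link budget over the pass against a single rate sized for the worst-case instant. The toy comparison below illustrates only the mechanism; the pass duration, rate profile, and selectable rate ladder are invented, not mission parameters.

```python
import numpy as np

# Toy LEO pass: achievable data rate versus time, peaking at maximum elevation.
t = np.linspace(0, 600, 601)                                 # 10-minute pass, 1 s steps
achievable = 1.0 + 9.0 * np.exp(-((t - 300) / 120) ** 2)     # Gbps, arbitrary bell shape
ladder = np.array([1, 2, 4, 8, 10])                          # selectable VDR rates (Gbps)

# VDR: at each instant transmit at the highest ladder rate the link can support.
vdr = np.array([ladder[ladder <= a].max() for a in achievable])

# CDR: one fixed rate for the whole pass, limited by the worst-case instant.
cdr = ladder[ladder <= achievable.min()].max()

print(f"VDR average throughput: {vdr.mean():.2f} Gbps")
print(f"CDR average throughput: {float(cdr):.2f} Gbps")
```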


Electronics · 2021 · Vol 10 (11) · pp. 1267
Author(s): Yong Liu, Bing Li, Yan Zhang, Xia Zhao

With the development of Internet of Things (IoT) and cloud-computing technologies, cloud servers need to store a huge volume of IoT data with high throughput and robust security. Joint Compression and Encryption (JCAE) schemes based on the Huffman algorithm have been regarded as a promising technology for enhancing data storage. Existing JCAE schemes still have the following limitations: (1) the keys in JCAE can be cracked by physical and cloning attacks; (2) rebuilding the Huffman tree reduces operational efficiency; (3) the compression ratio should be further improved. In this paper, a Huffman-based JCAE scheme using Physical Unclonable Functions (PUFs) is proposed. It provides physically secure keys via PUFs, efficient Huffman tree mutation without rebuilding, and a practical compression ratio by combining it with the Lempel–Ziv–Welch (LZW) algorithm. The performance of the instantiated PUFs and the derived keys was evaluated. Moreover, our scheme was demonstrated in a file protection system with an average throughput of 473 Mbps and an average compression ratio of 0.5586. Finally, the security analysis shows that our scheme resists physical and cloning attacks as well as several classic attacks, thus improving the security level of existing data protection methods.
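One simple way a keyed Huffman "mutation" can work without rebuilding the tree is to flip branch labels at key-selected depths, which keeps the code prefix-free while changing the symbol-to-codeword mapping. The sketch below assumes a simulated PUF response hashed into a keystream; it is not the paper's actual construction and omits the LZW stage and the full encryption pipeline.

```python
import hashlib
import heapq
from collections import Counter

def huffman_code(data: bytes) -> dict[int, str]:
    """Standard Huffman code for the byte frequencies observed in `data`."""
    cnt = Counter(data)
    if len(cnt) == 1:                                   # degenerate single-symbol input
        return {next(iter(cnt)): "0"}
    heap = [(freq, i, {sym: ""}) for i, (sym, freq) in enumerate(cnt.items())]
    heapq.heapify(heap)
    uid = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()} | {s: "1" + c for s, c in c2.items()}
        heapq.heappush(heap, (f1 + f2, uid, merged))
        uid += 1
    return heap[0][2]

def mutate_code(codes: dict[int, str], puf_response: bytes) -> dict[int, str]:
    """Key-dependent mutation: XOR bit position i of every codeword with keystream bit i.
    This is equivalent to swapping the children of all nodes at keyed depths, so the code
    stays prefix-free without rebuilding the tree. (Illustrative; not the paper's scheme.)"""
    keystream = "".join(f"{b:08b}" for b in hashlib.sha256(puf_response).digest())
    return {s: "".join(str(int(bit) ^ int(keystream[i % len(keystream)]))
                       for i, bit in enumerate(code))
            for s, code in codes.items()}

data = b"example IoT payload, example IoT payload"
plain_codes = huffman_code(data)
secret_codes = mutate_code(plain_codes, puf_response=b"\x13\x37\xbe\xef")  # simulated PUF output
print(secret_codes)
```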


2021 · Author(s): Gang Wu

A downlink scheduling scheme called the Coordinated Location-Dependent Downlink Scheduling Scheme (CLDSS), which combines intra-cell power allocation with inter-cell transmission coordination, is proposed for TD-CDMA networks. In the proposed scheme, each cell in the cellular network is partitioned into concentric areas based on the load distribution in the cell. Transmissions from the base stations are controlled based on the intra-cell load and coordinated to minimize inter-cell interference. The average throughput under CLDSS is analyzed for a two-cell (square) system in a shallow-fading environment with two (square) partitioned areas per cell. A simulation study is also performed to validate the numerical results obtained from the analysis. It is shown that the CLDSS scheme provides soft throughput, i.e., the average throughput remains relatively invariant with the number of users, and it performs well even with a non-uniform user distribution within a cell. The CLDSS scheme also improves fairness in terms of the throughput achievable by users anywhere in the cell.
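As a loose illustration of why pairing a cell-edge transmission in one cell with a cell-centre transmission in the neighbouring cell raises average throughput, the toy two-cell simulation below may help; the area split, channel gains, and power allocation are invented and this is not the analytical model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-cell downlink, each cell split into an inner and an outer (edge) area.
# Coordination pairs an "edge" slot in one cell with an "inner" slot in the other, so the
# strongest inter-cell interference never lands on an edge user. All numbers are illustrative.
P_INNER, P_EDGE = 1.0, 4.0               # intra-cell power allocation per area
G_OWN_INNER, G_OWN_EDGE = 1.0, 0.2       # mean gains to the serving base station
G_X_INNER, G_X_EDGE = 0.02, 0.3          # mean gains from the interfering base station
NOISE, SLOTS = 0.01, 10_000

def rate(p_serve, g_serve, p_intf, g_intf):
    return np.log2(1 + p_serve * g_serve / (p_intf * g_intf + NOISE))

coord = uncoord = 0.0
for _ in range(SLOTS):
    f = rng.exponential(1.0, size=4)     # independent fading on the four links
    # Coordinated: cell A serves an edge user while cell B serves an inner user.
    coord += rate(P_EDGE, G_OWN_EDGE * f[0], P_INNER, G_X_EDGE * f[1])
    coord += rate(P_INNER, G_OWN_INNER * f[2], P_EDGE, G_X_INNER * f[3])
    # Uncoordinated: both cells serve edge users simultaneously at full edge power.
    uncoord += rate(P_EDGE, G_OWN_EDGE * f[0], P_EDGE, G_X_EDGE * f[1])
    uncoord += rate(P_EDGE, G_OWN_EDGE * f[2], P_EDGE, G_X_EDGE * f[3])

print(f"average throughput per slot, coordinated:   {coord / SLOTS:.2f} bit/s/Hz")
print(f"average throughput per slot, uncoordinated: {uncoord / SLOTS:.2f} bit/s/Hz")
```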

