High Throughput and Low Cost Architecture for the Forward Quantization of the H.264/AVC Video Compression Standard

2010 ◽  
Vol 13 (3) ◽  
Author(s):  
Felipe Sampaio ◽  
Daniel Palomino ◽  
Robson Dornelles ◽  
Luciano Agostini

This work presents a dedicated hardware design for the Forward Quantization Module (Q module) of the H.264/AVC Video Coding Standard, using optimized multipliers. The goal of this design is to achieve high throughput rates combined with low hardware consumption. The architecture was described in VHDL and synthesized to the EP2S60F1020C3 Altera Stratix II FPGA and to the TSMC 0.18μm Standard Cell technology. The architecture reaches a maximum operating frequency of 364.2 MHz. At this frequency, it is able to process 117 QHDTV frames (3840x2048 pixels) per second. The designed architecture can be used in low-power and low-cost applications, since it can process high resolutions in real time even at very low operating frequencies and with low hardware consumption. In comparison with related works, the designed Q module achieves the best throughput and the lowest hardware consumption.
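For reference, the computation the Q module implements follows the standard H.264/AVC multiply-shift formulation of forward quantization. The sketch below shows this generic per-coefficient computation for a 4x4 block; it is not the authors' pipelined hardware datapath, only the reference arithmetic it realizes.

```c
/* Minimal sketch of H.264/AVC forward quantization for one 4x4 block of
 * transform coefficients, per the standard multiply-shift formulation:
 *   |Z| = (|W| * MF + f) >> qbits,   sign(Z) = sign(W)
 * where MF depends on QP % 6 and coefficient position, qbits = 15 + QP/6,
 * and f = 2^qbits/3 (intra) or 2^qbits/6 (inter).
 */
#include <stdint.h>
#include <stdlib.h>

/* Standard MF table indexed by [QP % 6][position class] */
static const int32_t MF[6][3] = {
    {13107, 5243, 8066},
    {11916, 4660, 7490},
    {10082, 4194, 6554},
    { 9362, 3647, 5825},
    { 8192, 3355, 5243},
    { 7282, 2893, 4559},
};

/* Position class: 0 for (even,even), 1 for (odd,odd), 2 otherwise */
static int pos_class(int i, int j)
{
    if ((i % 2 == 0) && (j % 2 == 0)) return 0;
    if ((i % 2 == 1) && (j % 2 == 1)) return 1;
    return 2;
}

/* Quantize a 4x4 block of transform coefficients W into quantized levels Z. */
void forward_quant_4x4(const int32_t W[4][4], int32_t Z[4][4],
                       int QP, int is_intra)
{
    const int qbits = 15 + QP / 6;
    const int32_t f = is_intra ? (1 << qbits) / 3 : (1 << qbits) / 6;

    for (int i = 0; i < 4; i++) {
        for (int j = 0; j < 4; j++) {
            int32_t mf  = MF[QP % 6][pos_class(i, j)];
            int32_t mag = (int32_t)(((int64_t)abs(W[i][j]) * mf + f) >> qbits);
            Z[i][j] = (W[i][j] < 0) ? -mag : mag;
        }
    }
}
```

In a hardware design such as the one described, the per-coefficient multiplication by MF is the natural target for the optimized multipliers mentioned in the abstract, since MF is drawn from a small constant table.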

2021 ◽  
Vol 20 (3) ◽  
pp. 1-22
Author(s):  
David Langerman ◽  
Alan George

High-resolution, low-latency applications in computer vision are ubiquitous in today's world of mixed-reality devices. These innovations provide a platform that can leverage the improving technology of depth sensors and embedded accelerators to enable higher-resolution, lower-latency processing of 3D scenes using depth-upsampling algorithms. This research demonstrates that filter-based upsampling algorithms are feasible for mixed-reality applications using low-power hardware accelerators. We parallelized and evaluated a depth-upsampling algorithm on two different devices: a reconfigurable-logic FPGA embedded within a low-power SoC, and a fixed-logic embedded graphics processing unit. We demonstrate that both accelerators can meet the 11 ms real-time latency requirement of mixed-reality applications.
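The abstract does not name the specific filter, so as an illustration of the filter-based depth-upsampling family, the sketch below implements a basic joint bilateral upsampling pass: each high-resolution depth value is a weighted average of nearby low-resolution depth samples, with spatial weights from pixel distance and range weights from a high-resolution intensity guide image. All function and parameter names here are illustrative assumptions, not the authors' implementation.

```c
/* Illustrative joint bilateral upsampling (JBU) of a low-resolution depth map
 * guided by a high-resolution grayscale image. 'scale' = hw / lw = hh / lh,
 * 'radius' is the low-resolution window half-width.
 */
#include <math.h>

void jbu_upsample(const float *depth_lo, int lw, int lh,
                  const float *guide_hi, float *depth_hi, int hw, int hh,
                  int scale, int radius,
                  float sigma_spatial, float sigma_range)
{
    for (int y = 0; y < hh; y++) {
        for (int x = 0; x < hw; x++) {
            /* Corresponding position in the low-resolution grid */
            float lx = (float)x / scale;
            float ly = (float)y / scale;
            float center = guide_hi[y * hw + x];
            float acc = 0.0f, wsum = 0.0f;

            for (int dy = -radius; dy <= radius; dy++) {
                for (int dx = -radius; dx <= radius; dx++) {
                    int qx = (int)lx + dx, qy = (int)ly + dy;
                    if (qx < 0 || qy < 0 || qx >= lw || qy >= lh) continue;

                    /* Spatial weight: distance in low-res coordinates */
                    float ds = (lx - qx) * (lx - qx) + (ly - qy) * (ly - qy);
                    /* Range weight: guide intensity difference at the
                       corresponding high-resolution pixel */
                    float g  = guide_hi[(qy * scale) * hw + (qx * scale)];
                    float dr = (center - g) * (center - g);

                    float w = expf(-ds / (2.0f * sigma_spatial * sigma_spatial)
                                   - dr / (2.0f * sigma_range * sigma_range));
                    acc  += w * depth_lo[qy * lw + qx];
                    wsum += w;
                }
            }
            depth_hi[y * hw + x] = (wsum > 0.0f) ? acc / wsum : 0.0f;
        }
    }
}
```

The per-pixel independence of this loop nest is what makes such filters amenable to parallelization on both FPGA fabric and embedded GPUs.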


Author(s):  
Laura Falaschetti ◽  
Lorenzo Manoni ◽  
Romel Calero Fuentes Rivera ◽  
Danilo Pau ◽  
Gianfranco Romanazzi ◽  
...  

2012 ◽  
Vol 10 (3) ◽  
pp. 329-334 ◽  
Author(s):  
D.M. Valero-Hervás ◽  
P. Morales ◽  
M.J. Castro ◽  
P. Varela ◽  
M. Castillo-Rama ◽  
...  

“Slow” and “Fast” C3 complement variants (C3S and C3F) result from a g.304C>G polymorphism that changes arginine to glycine at position 102. C3 variants are associated with complement-mediated diseases and with outcome in transplantation. In this work, C3 genotyping is performed with an optimized Real-Time PCR - High-Resolution Melting (RT-PCR-HRM) method. In an analysis of 49 subjects, 10.2% were C3FF, 36.7% were C3SF and 53.1% were C3SS. Allelic frequencies (70% for C3S and 30% for C3F) were in Hardy-Weinberg equilibrium and similar to those published previously. When comparing RT-PCR-HRM with the currently used Tetraprimer-Amplification Refractory Mutation System PCR (T-ARMS-PCR), concordance was 93.8%. The procedure shown here requires only a single primer pair and a low amount of DNA per reaction. Detection of C3 variants by RT-PCR-HRM is accurate, easy, fast and low cost, and it may be the method of choice for C3 genotyping.
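As a quick numerical check of the Hardy-Weinberg statement, the expected genotype proportions under equilibrium are p^2 (SS), 2pq (SF) and q^2 (FF) for allele frequencies p = 0.70 and q = 0.30, which can be compared with the observed 53.1% / 36.7% / 10.2%. The snippet below is purely illustrative arithmetic based on the figures quoted in the abstract.

```c
/* Illustrative Hardy-Weinberg check for the reported C3 allele frequencies. */
#include <stdio.h>

int main(void)
{
    const double p = 0.70, q = 0.30;           /* C3S and C3F frequencies */
    const double n = 49.0;                      /* genotyped subjects */
    const double obs[3]   = {0.531, 0.367, 0.102};   /* observed SS, SF, FF */
    const double expd[3]  = {p * p, 2.0 * p * q, q * q};
    const char  *label[3] = {"C3SS", "C3SF", "C3FF"};

    for (int i = 0; i < 3; i++) {
        printf("%s: expected %.1f%% (%.1f of %.0f subjects), observed %.1f%%\n",
               label[i], 100.0 * expd[i], n * expd[i], n, 100.0 * obs[i]);
    }
    return 0;
}
```

The expected proportions (49%, 42%, 9%) are close to the observed ones, consistent with the equilibrium reported.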


Author(s):  
Jin Woo Park ◽  
Hyokeun Lee ◽  
Boyeal Kim ◽  
Dong-Goo Kang ◽  
Seung Oh Jin ◽  
...  

Leonardo ◽  
2012 ◽  
Vol 45 (4) ◽  
pp. 322-329 ◽  
Author(s):  
Byron Lahey ◽  
Winslow Burleson ◽  
Elizabeth Streb

Translation is a multimedia dance performed on a vertical wall filled with the projected image of a lunar surface. Pendaphonics is a low-cost, versatile, and robust motion-sensing hardware-software system integrated with the rigging of Translation to detect the dancers' motion and provide real-time control of the virtual moonscape. Replacing remotely triggered manual cues with high-resolution, real-time control by the performers expands the expressive range and ensures synchronization of feedback with the performers' movements. This project is the first application of an ongoing collaboration between the Motivational Environments Research Group at Arizona State University (ASU) and STREB Extreme Action Company.


Author(s):  
Sangho Choe ◽  
Jeong-Hwa Yoo ◽  
Ponsuge Surani Shalika Tissera ◽  
Jo-In Kang ◽  
Hee-Kyung Yang
