An efficient pass-parallel architecture for embedded block coder in JPEG 2000

2017 ◽  
Vol 16 (5) ◽  
pp. 1595-1606 ◽  
Author(s):  
Refka Ghodhbani ◽  
Taoufik Saidani ◽  
Layla Horrigue ◽  
Mohamed Atri

2007 ◽  
Vol 9 (6) ◽  
pp. 1103-1112 ◽  
Author(s):  
Yu-Wei Chang ◽  
Hung-Chi Fang ◽  
Chun-Chia Chen ◽  
Chung-Jr Lian ◽  
Liang-Gee Chen

This paper presents a hardware architecture for image-adaptive watermarking in the wavelet domain. The embedding strength factor is selected by calculating the energy present between the different frequency bands. The algorithm is built on the CDF 5/3 wavelet used in the lossless compression mode of JPEG 2000. The wavelet filters are implemented with a parallel lifting-scheme architecture, making them more efficient in terms of speed and hardware utilization. The top module of the system combines serial and parallel architectures to balance speed and power consumption. The presented watermarking system is tested using the hardware-in-the-loop testing technique. The objective is to develop an image-adaptive, real-time, low-power, and robust watermarking system that can be incorporated into existing hardware such as digital cameras, scanners, and camcorders. The watermarking system's resilience against different attacks has been evaluated using the StirMark software, and the proposed system showed robustness against most geometric and non-geometric attacks.
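For context on the lifting scheme the abstract mentions: the reversible CDF 5/3 transform of JPEG 2000 replaces direct filtering with an integer predict step and an integer update step, which is what makes compact hardware implementations possible. A minimal software sketch of those two lifting steps (an illustration only, not the paper's hardware architecture; boundary handling here simply clamps indices, and an even-length input is assumed) might look like:

```python
def cdf53_forward(x):
    """One level of the reversible CDF 5/3 lifting transform.

    Splits x into even (approximation) and odd (detail) samples,
    then applies the integer predict/update lifting steps.
    Assumes len(x) is even; boundaries clamp to the edge sample.
    """
    even = x[0::2]
    odd = x[1::2]
    n = len(odd)
    # Predict: detail = odd - floor((left_even + right_even) / 2)
    d = [odd[i] - ((even[i] + even[min(i + 1, len(even) - 1)]) >> 1)
         for i in range(n)]
    # Update: approx = even + floor((d_left + d_right + 2) / 4)
    s = [even[i] + ((d[max(i - 1, 0)] + d[min(i, n - 1)] + 2) >> 2)
         for i in range(len(even))]
    return s, d


def cdf53_inverse(s, d):
    """Undo the lifting steps in reverse order; reconstruction is exact."""
    n = len(d)
    even = [s[i] - ((d[max(i - 1, 0)] + d[min(i, n - 1)] + 2) >> 2)
            for i in range(len(s))]
    odd = [d[i] + ((even[i] + even[min(i + 1, len(even) - 1)]) >> 1)
           for i in range(n)]
    x = [0] * (len(even) + len(odd))
    x[0::2] = even
    x[1::2] = odd
    return x
```

Because each lifting step is inverted by simply reversing the same integer arithmetic, the transform is lossless, which is why the 5/3 filter is the one used in JPEG 2000's reversible path.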


MRS Bulletin ◽  
1997 ◽  
Vol 22 (10) ◽  
pp. 5-6
Author(s):  
Horst D. Simon

Recent events in the high-performance computing industry have led scientists and the general public to worry about a crisis, or a lack of leadership, in the field. That concern is understandable considering the industry's history from 1993 to 1996. Cray Research, the historic leader in supercomputing technology, was unable to survive financially as an independent company and was acquired by Silicon Graphics. Two ambitious new companies that introduced new technologies in the late 1980s and early 1990s—Thinking Machines and Kendall Square Research—were commercial failures and went out of business. And Intel, which introduced its Paragon supercomputer in 1994, discontinued production only two years later.

During the same time frame, scientists who had finished the laborious task of writing scientific codes to run on vector parallel supercomputers learned that those codes would have to be rewritten if they were to run on the next-generation, highly parallel architecture. Scientists who are not yet involved in high-performance computing are understandably hesitant about committing their time and energy to such an apparently unstable enterprise.

However, beneath the commercial chaos of the last several years, a technological revolution has been occurring. The good news is that the revolution is over, leading to five to ten years of predictable stability, steady improvements in system performance, and increased productivity for scientific applications. It is time for scientists who were sitting on the fence to jump in and reap the benefits of the new technology.

