Lossy color image compression based on quantization
Shahid, Hiba
Abstract
Significant quantization-based improvements can be made in lossy image compression. In this thesis, we propose two effective quantization-based algorithms for compressing two-dimensional RGB images. The first method is based on a new codebook generation algorithm which, unlike existing VQ methods, forms the codewords directly in one step, using the less common local information in every image in the training set; existing methods instead form an initial codebook and update it iteratively by averaging vectors that represent small image regions. Another distinguishing aspect of this method is the use of non-overlapping hexagonal blocks, instead of the traditional rectangular blocks, to partition the images. We demonstrate how the codewords are extracted from such blocks, and we measure the separate contribution of each of the two distinguishing aspects of the proposed codebook generation algorithm. The second proposed method, unlike all known VQ algorithms, uses neither training images nor a codebook. It is designed to further improve the image quality obtained with the proposed codebook generation algorithm. In this algorithm, representative pixels are extracted from sets of perceptually similar color pixels, and the image pixels are reconstructed using the closest representative pixels. This algorithm differs from existing lossy compression algorithms, including our proposed codebook generation algorithm, in two main ways. First, it exploits pixel correlation according to the viewer’s perception rather than according to how numerically close a pixel is to its neighboring pixels. Second, it does not reconstruct an entire image block at once using a codeword; each pixel is reconstructed with the value of its closest representative pixel.
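As a rough illustration of the per-pixel reconstruction idea described in the abstract, the sketch below replaces every pixel of an RGB image with its nearest representative color. This is a minimal sketch under assumed details, not the thesis implementation: the function name `reconstruct_with_representatives`, the plain Euclidean distance in RGB space, and the assumption that representative pixels have already been extracted (e.g. from sets of perceptually similar pixels) are all illustrative choices.

```python
# Minimal sketch (not the thesis implementation): reconstruct an RGB image by
# replacing each pixel with its nearest representative color, mirroring the
# per-pixel reconstruction step described in the abstract.
import numpy as np

def reconstruct_with_representatives(image, representatives):
    """image: (H, W, 3) uint8 RGB array; representatives: (K, 3) representative
    colors, assumed to have been extracted beforehand (e.g. from sets of
    perceptually similar pixels). Returns the reconstructed (H, W, 3) image."""
    pixels = image.reshape(-1, 3).astype(np.float32)       # flatten to (H*W, 3)
    reps = np.asarray(representatives, dtype=np.float32)   # (K, 3)
    # Squared Euclidean distance from every pixel to every representative color.
    dists = ((pixels[:, None, :] - reps[None, :, :]) ** 2).sum(axis=2)
    nearest = dists.argmin(axis=1)                          # index of closest representative
    return reps[nearest].reshape(image.shape).astype(np.uint8)

# Example usage on random data.
img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
reps = np.random.randint(0, 256, size=(8, 3), dtype=np.uint8)
reconstructed = reconstruct_with_representatives(img, reps)
```

Note that the thesis judges pixel similarity by the viewer’s perception rather than by raw numerical closeness; the Euclidean metric above only stands in for whichever perceptual criterion the method actually uses.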
Item Metadata

Title: Lossy color image compression based on quantization
Creator: Shahid, Hiba
Publisher: University of British Columbia
Date Issued: 2015
Language: eng
Date Available: 2015-10-24
Provider: Vancouver : University of British Columbia Library
Rights: Attribution-NonCommercial-NoDerivs 2.5 Canada
DOI: 10.14288/1.0165803
Degree Grantor: University of British Columbia
Graduation Date: 2015-11
Scholarly Level: Graduate
Aggregated Source Repository: DSpace