Digital Image Processing Chapter 8: Image Compression

Author: Alban Robinson

Data vs Information

Information = the substance (matter) being conveyed; data = the means by which information is conveyed.

Image Compression

Reducing the amount of data required to represent a digital image while preserving as much information as possible.

Relative Data Redundancy and Compression Ratio

Relative data redundancy: R_D = 1 - 1/C_R

Compression ratio: C_R = n1/n2, where n1 and n2 are the numbers of information-carrying units in the original and compressed data sets.

Types of data redundancy:
1. Coding redundancy
2. Interpixel redundancy
3. Psychovisual redundancy

Coding Redundancy

Different coding methods yield different amounts of data to represent the same information.

Example of Coding Redundancy: Variable Length Coding vs. Fixed Length Coding

Fixed length code: L_avg = 3 bits/symbol
Variable length code: L_avg = 2.7 bits/symbol

(Images from Rafael C. Gonzalez and Richard E. Wood, Digital Image Processing, 2nd Edition.)

Variable Length Coding

Concept: assign the longest code word to the symbol with the least probability of occurrence.


Interpixel Redundancy

Parts of an image are highly correlated; in other words, we can predict a given pixel from its neighbors.

Run Length Coding

The gray scale image of size 343x1024 pixels as a binary image = 343x1024x1 = 351,232 bits.

Run length coding of line no. 100: (1,63) (0,87) (1,37) (0,5) (1,4) (0,556) (1,62) (0,210)
Total 12,166 runs; each run uses 11 bits, so the total = 133,826 bits.
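Encoding one scan line into (value, run length) pairs, the format used above (the function name is illustrative):

```python
def run_length_encode(line):
    """Encode a sequence of 0/1 pixels as (value, run_length) pairs."""
    runs = []
    prev, count = line[0], 1
    for pixel in line[1:]:
        if pixel == prev:
            count += 1              # extend the current run
        else:
            runs.append((prev, count))
            prev, count = pixel, 1  # start a new run
    runs.append((prev, count))      # flush the final run
    return runs

# the first three runs of line 100 in the example above
line = [1] * 63 + [0] * 87 + [1] * 37
print(run_length_encode(line))  # [(1, 63), (0, 87), (1, 37)]
```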

Psychovisual Redundancy

- 8-bit gray scale image
- 4-bit gray scale image (shows false contours)
- 4-bit IGS image

The eye does not respond with equal sensitivity to all visual information.

Improved Gray Scale (IGS) Quantization

Pixel   Gray level   Sum          IGS code
i-1     N/A          0000 0000    N/A
i       0110 1100    0110 1100    0110
i+1     1000 1011    1001 0111    1001
i+2     1000 0111    1000 1110    1000
i+3     1111 0100    1111 0100    1111

Algorithm:
1. Add the least significant 4 bits of the previous value of Sum to the current 8-bit pixel. If the most significant 4 bits of the pixel are 1111, add 0000 instead. Keep the result in Sum.
2. Keep only the most significant 4 bits of Sum as the IGS code.
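The two steps map directly to code; this is a sketch using the pixel values from the table above:

```python
def igs_quantize(pixels):
    """Improved Gray Scale quantization: 8-bit pixels -> 4-bit codes.

    The low 4 bits of the running Sum are added to each new pixel
    (unless the pixel's high nibble is 1111), decorrelating the
    quantization noise from the image structure.
    """
    codes = []
    total = 0  # previous value of Sum
    for p in pixels:
        carry = 0 if (p & 0xF0) == 0xF0 else (total & 0x0F)
        total = p + carry
        codes.append(total >> 4)  # keep only the most significant 4 bits
    return codes

pixels = [0b01101100, 0b10001011, 0b10000111, 0b11110100]
print(igs_quantize(pixels))  # [6, 9, 8, 15], i.e. 0110 1001 1000 1111
```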

Fidelity Criteria: how good is the compression algorithm?

- Objective fidelity criterion: RMSE, PSNR
- Subjective fidelity criterion: human rating
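Both objective criteria can be sketched in a few lines: RMSE over all pixels, and PSNR in dB against the 255 peak of an 8-bit image (the tiny 4-pixel "images" are made up for illustration):

```python
import math

def rmse(original, decompressed):
    """Root-mean-square error between two equal-size images (flat lists)."""
    n = len(original)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(original, decompressed)) / n)

def psnr(original, decompressed, max_value=255):
    """Peak signal-to-noise ratio in dB: 20*log10(max / RMSE)."""
    e = rmse(original, decompressed)
    return float("inf") if e == 0 else 20 * math.log10(max_value / e)

f = [52, 60, 61, 200]   # original
g = [50, 60, 63, 200]   # decompressed
print(round(rmse(f, g), 3))  # 1.414
```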


Image Compression Models

f(x,y) → [Source encoder] → [Channel encoder] → channel (noise) → [Channel decoder] → [Source decoder] → f^(x,y)

The source encoder reduces data redundancy; the channel encoder increases noise immunity.

Source Encoder and Decoder Models

Source encoder: f(x,y) → [Mapper] → [Quantizer] → [Symbol encoder]
- Mapper: reduces interpixel redundancy
- Quantizer: reduces psychovisual redundancy
- Symbol encoder: reduces coding redundancy

Source decoder: [Symbol decoder] → [Inverse mapper] → f^(x,y)

Channel Encoder and Decoder - Hamming code, Turbo code, …

Information Theory

Measuring information: I(E) = log(1/P(E)) = -log P(E)

Entropy or uncertainty (average information per symbol): H = -Σ_j P(a_j) log P(a_j)

Simple Information System

Binary Symmetric Channel
Source A = {a1, a2} = {0, 1}, z = [P(a1), P(a2)]
Destination B = {b1, b2} = {0, 1}, v = [P(b1), P(b2)]
Pe = probability of error

Transition diagram: each source symbol is received correctly with probability (1 - Pe) and flipped with probability Pe, so

P(b1) = P(a1)(1 - Pe) + (1 - P(a1))Pe
P(b2) = (1 - P(a1))(1 - Pe) + P(a1)Pe

Binary Symmetric Channel

Source A = {a1, a2} = {0, 1}, z = [P(a1), P(a2)]
Destination B = {b1, b2} = {0, 1}, v = [P(b1), P(b2)]

H(z) = -P(a1) log2 P(a1) - P(a2) log2 P(a2)
H(z|b1) = -P(a1|b1) log2 P(a1|b1) - P(a2|b1) log2 P(a2|b1)
H(z|b2) = -P(a1|b2) log2 P(a1|b2) - P(a2|b2) log2 P(a2|b2)
H(z|v) = P(b1) H(z|b1) + P(b2) H(z|b2)

Mutual information: I(z,v) = H(z) - H(z|v)
Capacity: C = max over z of I(z,v)

Binary Symmetric Channel (cont.)

Let pe = probability of error and pbs = P(a1). Then

z = [pbs, 1 - pbs]
v = [pbs(1 - pe) + (1 - pbs)pe,  (1 - pbs)(1 - pe) + pbs pe]

H(z) = -pbs log2(pbs) - (1 - pbs) log2(1 - pbs)

H(z|v) = -pbs(1 - pe) log2(pbs(1 - pe)) - (1 - pbs)pe log2((1 - pbs)pe)
         - (1 - pbs)(1 - pe) log2((1 - pbs)(1 - pe)) - pbs pe log2(pbs pe)

I(z,v) = H_bs(pbs(1 - pe) + (1 - pbs)pe) - H_bs(pe)

C = 1 - H_bs(pe)

where H_bs(t) = -t log2(t) - (1 - t) log2(1 - t) is the binary entropy function.

Binary Symmetric Channel


Communication System Model

Two cases to be considered: noiseless and noisy.


Noiseless Coding Theorem

Problem: how to code data as compactly as possible?

Shannon's first theorem defines the minimum average code word length per source symbol that can be achieved.

Let the source {A, z} be a zero-memory source with J symbols (zero memory = each outcome is independent of the others), and let a block of n source symbols be a symbol of A' = {α1, α2, α3, ..., α_{J^n}}.

Example: A = {0, 1}; for n = 3, A' = {000, 001, 010, 011, 100, 101, 110, 111}

Noiseless Coding Theorem (cont.)

Probability of each α_i is P(α_i) = P(a_j1) P(a_j2) ... P(a_jn)

Entropy of the source: H(z') = -Σ_{i=1}^{J^n} P(α_i) log P(α_i) = n H(z)

Each code word length l(α_i) can satisfy

log(1/P(α_i)) ≤ l(α_i) < log(1/P(α_i)) + 1

Then the average code word length satisfies

Σ_i P(α_i) log(1/P(α_i)) ≤ Σ_i P(α_i) l(α_i) < Σ_i P(α_i) log(1/P(α_i)) + 1

Noiseless Coding Theorem (cont.)

We get: H(z') ≤ L'_avg < H(z') + 1

From H(z') = n H(z), dividing by n:

H(z) ≤ L'_avg / n < H(z) + 1/n

and hence lim_{n→∞} (L'_avg / n) = H(z)

Coding efficiency: η = n H(z) / L'_avg

The minimum average code word length per source symbol cannot be lower than the entropy.

Extension Coding Example

First extension (n = 1): H = 0.918, L_avg = 1, so η1 = 0.918/1 = 0.918
Second extension (n = 2): H = 1.83, L_avg = 1.89, so η2 = 1.83/1.89 = 0.97


Noisy Coding Theorem

Problem: how to code data as reliably as possible?

Example: repeat each code 3 times. Source data = {1,0,0,1,1}; data to be sent = {111,000,000,111,111}.

Shannon's second theorem: the maximum rate of coded information is R = log(φ)/r, where φ = code size and r = block length.

Rate Distortion Function for BSC


Error-Free Compression: Huffman Coding

Huffman coding gives the smallest possible number of code symbols per source symbol.

Step 1: Source reduction
Step 2: Code assignment

The resulting code is instantaneous and uniquely decodable without referencing succeeding symbols.
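The two steps (source reduction by repeatedly merging the two least probable nodes, then code assignment by prefixing 0/1 back up the tree) can be sketched as below; the source probabilities are those of the Gonzalez & Woods Huffman example:

```python
import heapq

def huffman_code(freqs):
    """Build a Huffman code {symbol: bitstring} from symbol probabilities."""
    # heap entries: (weight, tiebreak, {symbol: code-so-far})
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        # source reduction: merge the two least probable nodes
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        # code assignment: prefix 0 to one subtree, 1 to the other
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, count, merged))
        count += 1
    return heap[0][2]

probs = {"a2": 0.4, "a6": 0.3, "a1": 0.1, "a4": 0.1, "a3": 0.06, "a5": 0.04}
code = huffman_code(probs)
lavg = sum(probs[s] * len(code[s]) for s in probs)
print(round(lavg, 2))  # 2.2 bits/symbol
```

Any valid Huffman tree for this source yields the same L_avg = 2.2, close to the source entropy of about 2.14 bits/symbol.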

Near Optimal Variable Length Codes


Arithmetic Coding

Nonblock code: a one-to-one correspondence between source symbols and code words does not exist.

Concept: the entire sequence of source symbols is assigned a single arithmetic code word, in the form of a number in an interval of real numbers between 0 and 1.

Arithmetic Coding Example

Source symbols a1, a2, a3, a4 with probabilities 0.2, 0.2, 0.4, 0.2. Encoding a1 a2 a3 a3 a4 narrows the interval step by step:

a1: [0, 0.2)
a2: [0.04, 0.08)        (0.04 = 0.2x0.2)
a3: [0.056, 0.072)      (0.056 = 0.04 + 0.4x0.04; 0.072 = 0.04 + 0.8x0.04)
a3: [0.0624, 0.0688)    (0.0624 = 0.056 + 0.4x0.016; 0.0688 = 0.056 + 0.8x0.016)
a4: [0.06752, 0.0688)

Any number between 0.06752 and 0.0688 can be used to represent the sequence a1 a2 a3 a3 a4.
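The narrowing steps can be sketched directly as subinterval arithmetic on [0, 1); the function name is illustrative, and in practice the final number is emitted bit by bit rather than kept as a float:

```python
def arithmetic_encode(sequence, probs):
    """Narrow [low, high) once per symbol; return the final interval.

    probs maps symbol -> probability; cumulative ranges follow dict order.
    """
    # cumulative ranges, e.g. a1:[0,0.2), a2:[0.2,0.4), a3:[0.4,0.8), a4:[0.8,1.0)
    ranges, c = {}, 0.0
    for s, p in probs.items():
        ranges[s] = (c, c + p)
        c += p
    low, high = 0.0, 1.0
    for s in sequence:
        span = high - low
        lo_s, hi_s = ranges[s]
        low, high = low + span * lo_s, low + span * hi_s
    return low, high  # any number in [low, high) encodes the sequence

probs = {"a1": 0.2, "a2": 0.2, "a3": 0.4, "a4": 0.2}
low, high = arithmetic_encode(["a1", "a2", "a3", "a3", "a4"], probs)
print(round(low, 5), round(high, 5))  # 0.06752 0.0688
```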

LZW Coding

Lempel-Ziv-Welch coding: assign fixed-length code words to variable-length sequences of source symbols. In the example figure, a 24-bit pixel sequence is replaced by a single 9-bit code.

LZW Coding Algorithm

0. Initialize a dictionary with all possible gray values (0-255).
1. Input the current pixel.
2. If the current pixel combined with the previous pixels forms one of the existing dictionary entries:
   2.1 Move to the next pixel and repeat Step 1.
   Else:
   2.2 Output the dictionary location of the currently recognized sequence (which does not include the current pixel).
   2.3 Create a new dictionary entry by appending the current pixel to the currently recognized sequence from 2.2.
   2.4 Move to the next pixel and repeat Step 1.

LZW Coding Example

Input sequence: 39 39 126 126 39 39 126 126 39 39 126 126

Dictionary location   Entry
0-255                 single gray values 0-255
256                   39-39
257                   39-126
258                   126-126
259                   126-39
260                   39-39-126
261                   126-126-39
262                   39-39-126-126

Encoded output (9 bits each): 39 39 126 126 256 258 260
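A runnable sketch of the algorithm on the example pixels; note the sketch also flushes the last recognized sequence (the trailing 126) when the input ends:

```python
def lzw_encode(pixels, num_values=256):
    """LZW: fixed-length codes for variable-length pixel sequences."""
    # Step 0: dictionary of all single gray values
    dictionary = {(v,): v for v in range(num_values)}
    recognized = ()
    output = []
    for p in pixels:
        candidate = recognized + (p,)
        if candidate in dictionary:
            recognized = candidate                   # keep growing the match
        else:
            output.append(dictionary[recognized])    # 2.2: emit known sequence
            dictionary[candidate] = len(dictionary)  # 2.3: new entry
            recognized = (p,)
    if recognized:
        output.append(dictionary[recognized])        # flush the final sequence
    return output

pixels = [39, 39, 126, 126, 39, 39, 126, 126, 39, 39, 126, 126]
print(lzw_encode(pixels))  # [39, 39, 126, 126, 256, 258, 260, 126]
```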

Bit-Plane Coding

Original image → bit-plane images (bit 7, bit 6, ..., bit 0) → binary image compression of each plane.

Example of binary image compression: run length coding.

Bit Planes

Bit-plane images of the original gray scale image, from bit 7 (most significant) down to bit 0 (least significant).

Gray-coded Bit Planes

Original bit planes a7, a6, a5, a4 and their gray-coded planes g7, g6, g5, g4.

Gray code: g_i = a_i ⊕ a_{i+1} for 0 ≤ i ≤ 6, and g_7 = a_7, where a_i = original bit planes and ⊕ = XOR.
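Per pixel, the relation g_i = a_i XOR a_{i+1} is just an XOR of the gray level with itself shifted right by one bit:

```python
def gray_code(value):
    """8-bit gray level -> gray code: g = a XOR (a >> 1).

    Bitwise this is g_i = a_i ^ a_(i+1) for i < 7 and g_7 = a_7.
    """
    return value ^ (value >> 1)

# adjacent gray levels 127 and 128 differ in all 8 bits in binary,
# but in only one bit after gray coding:
print(format(gray_code(127), "08b"))  # 01000000
print(format(gray_code(128), "08b"))  # 11000000
```

This is why a smooth intensity ramp, which flips many bit planes at once in natural binary, produces far fewer transitions in the gray-coded planes.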

Gray-coded Bit Planes (cont.)

Remaining planes: a3, a2, a1, a0 and g3, g2, g1, g0.

There are fewer 0-1 and 1-0 transitions in gray-coded bit planes; hence gray-coded bit planes are more efficient for coding.

Relative Address Coding (RAC)

Concept: track the binary transitions that begin and end each black and white run.

Contour Tracing and Coding

Represent each contour by a set of boundary points and directionals.

Error-Free Bit-Plane Coding

Lossless vs. Lossy Coding

Lossless coding, source encoder: f(x,y) → [Mapper] → [Symbol encoder]
- Mapper: reduces interpixel redundancy
- Symbol encoder: reduces coding redundancy

Lossy coding, source encoder: f(x,y) → [Mapper] → [Quantizer] → [Symbol encoder]
- The added quantizer reduces psychovisual redundancy.

Transform Coding (for fixed resolution transforms)

Encoder: input image (NxN) → construct nxn subimages → forward transform → quantizer → symbol encoder → compressed image

Decoder: compressed image → symbol decoder → inverse transform → construct nxn subimages → decompressed image

The quantization process is what makes transform coding "lossy". Examples of transformations used for image compression: DFT and DCT.

Transform Coding (for fixed resolution transforms)

Three parameters affect transform coding performance:
1. Type of transformation
2. Size of subimage
3. Quantization algorithm

2D Discrete Transformation

Forward transform: T(u,v) = Σ_{x=0}^{N-1} Σ_{y=0}^{N-1} f(x,y) g(x,y,u,v)

where g(x,y,u,v) = forward transformation kernel or basis function; T(u,v) is called the transform coefficient image.

Inverse transform: f(x,y) = Σ_{u=0}^{N-1} Σ_{v=0}^{N-1} T(u,v) h(x,y,u,v)

where h(x,y,u,v) = inverse transformation kernel or inverse basis function.

Transform Example: Walsh-Hadamard Basis Functions

g(x,y,u,v) = h(u,v,x,y) = (1/N) (-1)^{Σ_{i=0}^{m-1} [b_i(x) p_i(u) + b_i(y) p_i(v)]}

where N = 2^m, b_k(z) = the k-th bit of z, and (sums performed modulo 2):

p_0(u) = b_{m-1}(u)
p_1(u) = b_{m-1}(u) + b_{m-2}(u)
p_2(u) = b_{m-2}(u) + b_{m-3}(u)
...
p_{m-1}(u) = b_1(u) + b_0(u)

Advantage: simple, easy to implement.
Disadvantage: not good packing ability.

Transform Example: Discrete Cosine Basis Functions

g(x,y,u,v) = h(u,v,x,y) = α(u) α(v) cos[(2x+1)uπ / 2N] cos[(2y+1)vπ / 2N]

where α(u) = sqrt(1/N) for u = 0, and sqrt(2/N) for u = 1, ..., N-1.

DCT is one of the most frequently used transforms for image compression; for example, DCT is used in JPEG files.

Advantage: good packing ability, moderate computational complexity.
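The kernel above can be evaluated directly. This O(N^4) reference implementation is a sketch for small blocks, not the fast factored DCT used in practice:

```python
import math

def dct_2d(block):
    """Direct 2D DCT-II of an NxN block using the basis functions above."""
    n = len(block)

    def alpha(u):
        return math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n)

    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

# good packing ability: a constant block puts all energy into T(0,0)
flat = [[8.0] * 4 for _ in range(4)]
coeffs = dct_2d(flat)
print(round(coeffs[0][0], 6))  # 32.0; every AC coefficient is ~0
```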

Transform Coding Examples

Original image: 512x512 pixels. Subimage size: 8x8 pixels (64 pixels). Quantization by truncating 50% of the coefficients (only the 32 largest coefficients are kept).

Fourier: RMS error = 1.28
Hadamard: RMS error = 0.86
DCT: RMS error = 0.68

DCT vs DFT Coding

DFT coefficients have abrupt changes at the boundaries of blocks. The advantage of DCT over DFT is that the DCT coefficients are more continuous at the boundaries of blocks.

Subimage Size and Transform Coding Performance

This experiment: quantization is made by truncating 75% of the transform coefficients.

DCT performs best, and a subimage size of 8x8 is enough.

Subimage Size and Transform Coding Performance (cont.)

Reconstructed using 25% of the DCT coefficients (CR = 4:1) with subimage sizes of 2x2, 4x4, and 8x8 pixels; zoomed details are compared with the original.

Quantization Process: Bit Allocation

Assign different numbers of bits to transform coefficients based on the importance of each coefficient:
- More important coefficients: assign a larger number of bits.
- Less important coefficients: assign a smaller number of bits, or none at all.

Two popular bit allocation methods:
1. Zonal coding: allocate bits on the basis of maximum variance, using a fixed mask for all subimages.
2. Threshold coding: allocate bits based on the maximum magnitudes of the coefficients.

Example: Results with Different Bit Allocation Methods

Threshold coding: reconstructed using 12.5% of the coefficients (the 8 coefficients with largest magnitude).
Zonal coding: reconstructed using 12.5% of the coefficients (the 8 coefficients with largest variance).
Error images and zoomed details are compared.

Zonal Coding Example

Zonal mask and zonal bit allocation.

Threshold Coding Example

Threshold mask and thresholded coefficient ordering.

Threshold Coding Quantization

Three popular thresholding methods:
1. Global thresholding: use a single global threshold value for all subimages.
2. N-largest coding: keep only the N largest coefficients.
3. Normalized thresholding: each subimage is normalized by a normalization matrix Z(u,v) before rounding:

T^(u,v) = round( T(u,v) / Z(u,v) )

Restoration before decompressing: T~(u,v) = T^(u,v) Z(u,v)
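Normalized thresholding and its restoration can be sketched per coefficient; the values and the 2x2 Z(u,v) fragment below are made up for illustration (note that Python's round halves to even):

```python
def quantize(t, z):
    """Normalized threshold quantization: t_hat(u,v) = round(t(u,v) / z(u,v))."""
    return [[round(tv / zv) for tv, zv in zip(tr, zr)] for tr, zr in zip(t, z)]

def dequantize(t_hat, z):
    """Restoration before decompression: t_tilde(u,v) = t_hat(u,v) * z(u,v)."""
    return [[tv * zv for tv, zv in zip(tr, zr)] for tr, zr in zip(t_hat, z)]

# hypothetical 2x2 corner of a DCT coefficient block and of Z(u,v)
t = [[512.0, -31.0], [18.0, 2.0]]
z = [[16.0, 11.0], [12.0, 10.0]]
t_hat = quantize(t, z)
print(t_hat)                 # [[32, -3], [2, 0]]
print(dequantize(t_hat, z))  # [[512.0, -33.0], [24.0, 0.0]]
```

Small coefficients quantize to 0 and need no bits, which is where the compression comes from; the restored values differ from the originals, which is where the loss comes from.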

DCT Coding Example

Method: normalized thresholding, subimage size 8x8 pixels.

CR = 38:1: error image RMS error = 3.42.
CR = 67:1: blocking artifacts at subimage boundaries are visible in the zoomed details.

Wavelet Transform Coding: Multiresolution Approach

Encoder: input image (NxN) → wavelet transform → quantizer → symbol encoder → compressed image
Decoder: compressed image → symbol decoder → inverse wavelet transform → decompressed image

Unlike DFT and DCT, the wavelet transform is a multiresolution transform, and no subimage construction is needed.

What is a Wavelet Transform?

Once upon a time, humans used tally marks to represent a number, e.g. 25 strokes for 25. With this numerical system, we need a lot of space to represent the number 1,000,000. After the Arabic number system was invented, life became much easier: we can represent a number by digits, X,XXX,XXX, where the 1st digit counts 1s, the 2nd digit counts 10s, the 3rd digit counts 100s, and so on. An Arabic number is one kind of multiresolution representation.

Like a number, any signal can also be represented by a multiresolution data structure: the wavelet transform.

What is a Wavelet Transform? (cont.)

The wavelet transform has its background in multiresolution analysis and subband coding. Other important background:

- Nyquist theorem: the minimum sampling rate needed to sample a signal without loss of information is twice the maximum frequency of the signal.
- We can perform a frequency shift by multiplying by a complex sinusoidal signal in the spatial domain:

f(x,y) e^{j2π(u0 x + v0 y)} ⇔ F(u - u0, v - v0)

Wavelet History: Image Pyramid

If we smooth and then down-sample an image repeatedly, we get a pyramidal image: coarser (decreasing resolution) toward the top, finer (increasing resolution) toward the base.

Image Pyramid and Multiscale Decomposition

Image (NxN) → smooth → down-sample by 2 → image (N/2 x N/2) → up-sample by 2 → interpolate → predicted image (NxN)

Prediction error (lost details, NxN) = original image - predicted image

Question: what information is lost after down-sampling?
Answer: the lost information is the prediction error image.

Image Pyramid and Multiscale Decomposition (cont.)

Hence we can decompose an image with the following process:

Image (NxN) → smooth and down-sample by 2 → approximation image (N/2 x N/2) → smooth and down-sample by 2 → approximation image (N/4 x N/4) → ...

At each level, the approximation image is up-sampled by 2 and interpolated, then subtracted from the previous level to give the prediction error (NxN, N/2 x N/2, ...).

Image Pyramid and Multiscale Decomposition (cont.)

Multiresolution representation: original image (NxN) = approximation image (N/8 x N/8) plus prediction errors (residues) at N/4 x N/4, N/2 x N/2, and NxN.

Note that this process is not a wavelet decomposition process!

Example of Pyramid Images

Approximation images (using Gaussian smoothing) and prediction residues.

Subband Coding

Subband decomposition process for x(n), N points:
- h0(n) (LPF) → down-sample by 2 → approximation a(n), N/2 points
- h1(n) (HPF) → frequency shift by N/2 → down-sample by 2 → detail d(n), N/2 points

All information in x(n) is completely preserved in a(n) and d(n).
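One analysis stage and its inverse can be sketched with orthonormal Haar filters (an assumption for concreteness; the slides do not fix a particular filter pair), showing that a(n) and d(n) together preserve everything:

```python
import math

R2 = math.sqrt(2.0)

def analyze(x):
    """One subband stage: x(n) -> a(n), d(n), each N/2 points."""
    a = [(x[i] + x[i + 1]) / R2 for i in range(0, len(x), 2)]  # LPF + down-sample
    d = [(x[i] - x[i + 1]) / R2 for i in range(0, len(x), 2)]  # HPF + down-sample
    return a, d

def synthesize(a, d):
    """Up-sample, filter with the synthesis pair, and sum: recovers x(n)."""
    x = []
    for ai, di in zip(a, d):
        x.append((ai + di) / R2)
        x.append((ai - di) / R2)
    return x

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a, d = analyze(x)
print([round(v, 6) for v in synthesize(a, d)])  # recovers x (up to round-off)
```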

Subband Coding (cont.)

Subband reconstruction process:
- a(n), N/2 points → up-sample by 2 → g0(n) (interpolation)
- d(n), N/2 points → up-sample by 2 → g1(n) (interpolation) → frequency shift by N/2

The two branches are summed to recover x(n), N points.

Subband Coding (cont.)


2D Subband Coding


Example of 2D Subband Coding

- Approximation: filtering in both x and y directions using h0(n).
- Horizontal detail: filtering in the x-direction using h1(n) and in the y-direction using h0(n).
- Vertical detail: filtering in the x-direction using h0(n) and in the y-direction using h1(n).
- Diagonal detail: filtering in both x and y directions using h1(n).

1D Discrete Wavelet Transformation

x(n), N points, is passed through a wavelet/scaling filter pair, each output down-sampled by 2, and the cascade is repeated on the approximation:

Level 1: detail d1(n), N/2 points
Level 2: detail d2(n), N/4 points
Level 3: detail d3(n), N/8 points, plus approximation a3(n), N/8 points

ψ(n) = a wavelet function; φ(n) = a scaling function.

Note that the number of points of x(n) and of the wavelet coefficients are equal (N points in total).
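The three-level cascade can be sketched by iterating one filter-and-downsample stage on the approximation; Haar filters are again an assumption for concreteness:

```python
import math

R2 = math.sqrt(2.0)

def haar_stage(x):
    """One filter-and-downsample stage: returns (approximation, detail)."""
    a = [(x[i] + x[i + 1]) / R2 for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) / R2 for i in range(0, len(x), 2)]
    return a, d

def dwt(x, levels):
    """Multi-level 1D DWT: [d1, d2, ..., a_levels], N coefficients in total."""
    details = []
    a = list(x)
    for _ in range(levels):
        a, d = haar_stage(a)
        details.append(d)   # keep the detail, recurse on the approximation
    return details + [a]

coeffs = dwt([1.0] * 8, 3)
print([len(c) for c in coeffs])  # [4, 2, 1, 1]: N/2, N/4, N/8 details + N/8 approx
```

For this constant input every detail coefficient is zero, so all the signal energy ends up in the single approximation coefficient.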

1D Discrete Wavelet Transformation

2D Discrete Wavelet Transformation

d = diagonal detail, h = horizontal detail, v = vertical detail, a = approximation.

The original image (NxN) is decomposed level by level: level 1 gives d1, h1, v1, and a1; a1 is decomposed at level 2 into d2, h2, v2, and a2; a2 is decomposed at level 3 into d3, h3, v3, and a3.

2D Discrete Wavelet Transformation (cont.)

The original image (NxN) becomes NxN wavelet coefficients, arranged with a3, h3, v3, d3 innermost, then h2, v2, d2, then h1, v1, d1 outermost.

d = diagonal detail: filtering in both x and y directions using the wavelet (highpass) filter
h = horizontal detail: filtering in the x-direction using the wavelet filter and in the y-direction using the scaling (lowpass) filter
v = vertical detail: filtering in the x-direction using the scaling filter and in the y-direction using the wavelet filter
a = approximation: filtering in both x and y directions using the scaling filter

Example of 2D Wavelet Transformation

Original image.
First level decomposition: LL1, HL1, LH1, HH1.
Second level: LL1 is further decomposed into LL2, HL2, LH2, HH2.
Third level: LL2 is further decomposed into LL3, HL3, LH3, HH3.

Examples: Types of Wavelet Transform

- Haar wavelets
- Daubechies wavelets
- Symlets
- Biorthogonal wavelets

Wavelet Transform Coding for Image Compression

Encoder: input image (NxN) → wavelet transform → quantizer → symbol encoder → compressed image
Decoder: compressed image → symbol decoder → inverse wavelet transform → decompressed image

Wavelet Transform Coding Example

CR = 38:1: error image RMS error = 2.29
CR = 67:1: error image RMS error = 2.96

Zoomed details show no blocking artifacts.

Wavelet Transform Coding Example (cont.)

CR = 108:1: error image RMS error = 3.72
CR = 167:1: error image RMS error = 4.73

Wavelet Transform Coding vs. DCT Coding

At CR = 67:1: wavelet error image RMS error = 2.96; DCT (8x8) error image RMS error = 6.33. Zoomed details are compared.

Type of Wavelet Transform and Performance

Number of Wavelet Transform Levels and Performance

Threshold Level and Performance

Table 8.14 (cont.)

Table 8.19 (cont.)

Lossless Predictive Coding Model

Lossless Predictive Coding Example

Lossy Predictive Coding Model

Delta Modulation

Linear Prediction Techniques: Examples

Quantization Function

Lloyd-Max Quantizers

Lossy DPCM

DPCM Result Images

Error Images of DPCM