Lab (2) - Image Processing

Representation of an Image as a Matrix
Sampling and Quantization
Effect of Sampling Rate on Image Quality
Effect of Quantization on Image Quality


Preparation
Use MATLAB to complete the in-lab and post-lab work.  You will create a MS Word file for the in-lab work and another one for the post-lab.  Include your programs or list of commands that you have used. Also, include the output images in your MS Word file.

Viewing an image as a matrix
In class we discussed that an image with M rows, N columns, and an intensity depth of 2^m levels (i.e., m bits per pixel) requires about:
b = M*N*m 
bits of storage. If we assume 8 bits per byte, then we can divide b by 8 to get an estimate of the number of bytes required to store that image (B = b/8).
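As a quick check of the formula above, the storage estimate can be computed directly in MATLAB (the numbers here assume a 256x256, 8-bit image like the two below):

```matlab
% Storage estimate for an M x N image with m bits per pixel
M = 256; N = 256; m = 8;     % 2^m = 256 gray levels
b = M * N * m;               % storage in bits:  b = M*N*m
B = b / 8;                   % storage in bytes: B = b/8
fprintf('b = %d bits, B = %d bytes\n', b, B);   % 524288 bits, 65536 bytes
```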
 
[Image: MRI Brain Scan (et01.jpg)]
[Image: CT Scan of Spleen (sl00.gif)]
Sampling
As we discussed in class, an image is denoted by f(x,y), where f represents the intensity and (x,y) represents the spatial coordinates.  Image sampling is the process of digitizing the spatial coordinates (x,y).  An image is a continuous function of its spatial coordinates; we use a discrete grid of MxN points to represent the image digitally.  The choice of M and N is one of the factors that determines digital image quality, since it sets the image resolution.  In this part we want to see the effect of sampling on image quality.  As we discussed in class, sampling determines the number of pixels you choose to represent your image with.

The above two images are both 256x256 images, already in digital format.  Since we don't have a way to take the actual scenes and scan them at different sampling rates, for now we assume that the above two images are the originals and we sample them instead.  Thus, we will try to store the images with different numbers of pixels (meshes).  Basically, we will trick ourselves. 

Assume the above images are the originals, both 256x256 in size.  If these images were sampled at 1/2 of the existing rate, then we would have 1/2 of the 256 pixels on each row and 1/2 of the 256 pixels on each column.  In a sense, if we wanted to keep the same appearance, pixels would become twice as big in each dimension.  Does this give you a clue on how to reduce the sampling rate by 2? 

Well, one easy way to do this is to copy each pixel over the pixels to its right, below it, and diagonally below.  This way the pixel appears twice as big, and in reality we have removed (replaced) every other pixel.   This creates a version of the original image that is sampled at 1/2 the sample rate.

If you wish to sample at 1/4, then you would copy the first pixel of each block over its three immediate neighbors, on both rows and columns.  By the way, this process is called pixel replication.  Following is the result of 1/4 sampling for the et01.jpg image.

[Image: et01.jpg sampled at 1/4]

Example: 
Suppose original matrix was:
f = [ 2   4   6   8
      8  10  12  16
     20  20   0   8
      6   8   4   2 ]

This matrix sampled at 1/2 may look like this (each 2x2 block takes on the value of its top-left pixel: the 4 is replaced by 2, as are the 8 and 10, and so forth):

f = [ 2   2   6   6
      2   2   6   6
     20  20   0   0
     20  20   0   0 ]

As you may have noticed, we have represented the original f (image) with only 4 distinct pixel values.  This matrix sampled at 1/4 may look like this:
f = [ 2   2   2   2
      2   2   2   2
      2   2   2   2
      2   2   2   2 ]

Now the entire image is represented with one pixel only.
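The replication procedure above can be sketched in MATLAB as a pair of loops over the image blocks. This is only one way to do it; the loop bounds assume the image dimensions are divisible by k, which holds for 256x256 images and the sampling factors used here:

```matlab
% Simulate sampling a gray scale image at 1/k of the original rate
% by pixel replication: each k x k block is filled with its top-left value.
f = imread('et01.jpg');       % original 256 x 256 image
k = 4;                        % sample at 1/4 of the original rate
g = f;                        % output image, same size as f
for i = 1:k:size(f,1)
    for j = 1:k:size(f,2)
        g(i:i+k-1, j:j+k-1) = f(i,j);   % replicate the block's first pixel
    end
end
imshow(g); title('Sampled at 1/4 by pixel replication');
```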

Exercise 2.1
Use MATLAB to read the above two images, then sample them at 1/2, 1/4, 1/8, 1/16, and 1/128 of the original rate.  This creates 5 images for each of the above images.  Can you (subjectively) tell which of these two images was most affected by the reduction in sample rate?

Quantization
To represent an image in digital form, the intensity (amplitude) must be digitized too.  In order to do this, we break the intensity into a number of levels called gray levels (for a gray scale image).  Quantization is commonly done in powers of 2; thus, we represent the number of gray levels as a power of two.  A pixel in an 8-bit image (2^8 = 256 gray levels) may have 256 shades of gray, with the value 255 for white and 0 for black.  Any intensity value between these two is considered a shade of gray.

To see the effect of quantization, once again we use the et01.jpg and sl00.gif images as the originals and we quantize them at different quantization levels.  Both of these images have 2^8 = 256 shades of gray.  If we quantize these images at 1/2 of that intensity range, then we would have 2^7 = 128 shades of gray, i.e., 0 for black and 127 for white.  Thus, in that case 0-255 would map to 0-127.  Did this give you a clue on how to quantize by 1/2?

To quantize the above image with 128 gray levels, we map every two shades of gray to one.  Thus, 254 and 255 will be mapped to 127, 252 and 253 will be mapped to 126, ..., and 1 and 0 will be mapped to 0.  Similarly, you can quantize with 64, 32, 16, 8, ... levels. 

Following is the et01.jpg image quantized at 1/4 of the original intensity.
[Image: et01.jpg quantized at 1/4 of the original intensity]

This image has 64 = 256/4 gray levels, so the maximum intensity value is 63.

Example: 
Suppose the original matrix in 8-bit format was given as:
f = [ 122  114   16  118
       80   81  127  126
      255  254    0    1
       16   18   14   13 ]

Quantized at 1/2, the intensity values get mapped to the range 0-127 (each value is divided by 2, discarding the remainder):
f = [  61   57    8   59
       40   40   63   63
      127  127    0    0
        8    9    7    6 ]
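The mapping in the example above is just an integer division of each intensity by the reduction factor. A minimal MATLAB sketch, assuming an 8-bit (uint8) input image:

```matlab
% Quantize an 8-bit image to 256/k gray levels by mapping every
% k consecutive intensities to one level (e.g. k = 2: 0-255 -> 0-127).
f = imread('et01.jpg');            % uint8 image, 256 gray levels
k = 2;                             % quantize at 1/2 of the original levels
g = uint8(floor(double(f) / k));   % 254,255 -> 127; ...; 0,1 -> 0
imshow(g, [0 255/k]);              % display using the reduced intensity range
```

Note the conversion to double before dividing: integer division on uint8 values rounds to nearest rather than truncating, which would shift the mapping.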

Exercise 2.2
Quantize the two original images at 1/2, 1/4, 1/8, 1/16, and 1/128 of the original number of gray levels.  This creates 5 images for each of the above images.  Can you (subjectively) tell which of these two images was most affected by the reduction in quantization levels?

At what quantization level do you start seeing inconsistencies (false contouring)?

Preparations:
You will complete some exercises during the lab.  Those are in-lab work.  Create a file in MS Word and cut and paste the required commands and the resulting images in that file.  Make sure to mark the activity number and have your name on that file.  Once you are done with the in-class activities, e-mail me the file as an attachment.  Please send only one file for the entire in-lab.  Write in-lab in the subject line of your e-mail. 

Similarly, you have to complete a post-lab and e-mail me the file as an attachment.  Write post-lab on the subject line of that e-mail. 

It is best that you create a lab3 directory on the machine you are using for your work.  Try to use the same machine every time you come to the lab.  Transfer your files (via FTP) to cs so you have backup copies in case you need them.  

Intensity Transformation Functions

An intensity transformation function, T, depends only on intensity values and not explicitly on the spatial location (x,y).  This type of transformation function can be written as:

s = T(r) 

where r denotes the intensity of a pixel and s the intensity of the output image, both at any corresponding point (x,y) in the image. 

In MATLAB, the function imadjust is the basic Image Processing Toolbox (IPT) tool for intensity transformations of gray scale images.  It has the syntax:

g = imadjust(f, [low_in  high_in], [low_out   high_out], gamma)

This function maps the intensity values of the original image in the range [low_in  high_in] to corresponding values in the range [low_out  high_out] to produce a new image g.  Values below low_in map to low_out, and values above high_in map to high_out.  This function uses one of the three mapping options shown below, as indicated by gamma.  Note that all 4 of these values are between 0 and 1, and the default values in [ ] are 0 and 1.  The input image can be of class uint8, uint16, or double, and the output image will have the same class as the input image.  imadjust adjusts the supplied values based on the class: for uint8 it multiplies them by 255, and for uint16 it multiplies them by 65535.  Parameter gamma specifies the shape of the curve that maps the intensity values in f to create g; the three possibilities are shown below.  Use the help command in MATLAB to learn more about imadjust as you wish.

Note: We worked out several examples in class where we found that the above function actually takes values between 0 and 1, but multiplies each by the maximum intensity value as determined by the class type to get the actual intensity values.  For instance, [0 1] for data of uint8 type is the same as [0 255].  Fractions are worked out in the same way.

[Figure: the three imadjust mapping curves, for gamma < 1, gamma = 1, and gamma > 1]
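For example, the three gamma regimes can be compared side by side. This is only a sketch: the file name im1.jpg is an assumption, and any gamma values on either side of 1 will do:

```matlab
% Compare the three gamma mapping options of imadjust on a gray scale image.
f  = imread('im1.jpg');                   % Image (1); file name is an assumption
g1 = imadjust(f, [0 1], [0 1], 0.5);      % gamma < 1: brightens dark regions
g2 = imadjust(f, [0 1], [0 1], 1.0);      % gamma = 1: linear mapping
g3 = imadjust(f, [0 1], [0 1], 2.0);      % gamma > 1: darkens the image
subplot(2,2,1); imshow(f);  title('original');
subplot(2,2,2); imshow(g1); title('gamma = 0.5');
subplot(2,2,3); imshow(g2); title('gamma = 1');
subplot(2,2,4); imshow(g3); title('gamma = 2');
```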

Exercise 2.3
The following image is a gray scale brain image in jpeg format.  You can save the image in your working directory by right-clicking on it and using Save As.  Apply imadjust with 5 different gamma values to adjust the existing intensity values of the image to the range [0, 255].  Copy the results into your MS Word file.  Note that I haven't asked for any particular gamma values, but it is best to spread them out so you get a feeling for how the above three options work.  I.e., it is perhaps best to have gamma = 1 and two other values on either side of 1.

[Image: gray scale brain image]
Image (1)

Depending on the values you have selected for the above 5 cases, you will have noticed some changes in the output image.  Create a table similar to the one shown below and summarize the effect of each gamma value in that table.  Fill out the first column to match your gamma values, and enter Yes/No in each of the remaining cells.

gamma value  | Values closer to the min intensity became smaller | Values closer to the max intensity became larger | Overall image became brighter | Overall image became darker
------------ | ------------------------------------------------- | ------------------------------------------------ | ----------------------------- | ---------------------------
(your value) |                                                   |                                                  |                               |
(your value) |                                                   |                                                  |                               |
1            |                                                   |                                                  |                               |
(your value) |                                                   |                                                  |                               |
(your value) |                                                   |                                                  |                               |

Side note:  If you want to find the minimum and maximum value in the image, you can use:
minf = min(min(f))      or         maxf = max(max(f))

Post-Lab - Due Wednesday September 27

Include all your MATLAB files in your submission, and include the images you have created in your MS Word file.

Question (1) - Use the imadjust MATLAB command to solve the Exercise 2.3 problem above.  Please include the images and use the whos command to create a table for the size of the images (in bytes or bits) after each stage.

Question (2) - Use either the nearest neighbor interpolation or pixel replication (whichever you find most suitable or easier to use), the concepts introduced in this lab, and the material in Chapter (3) and Chapter (4) of the textbook to:

    a) Shrink the above images by 2
    b) Shrink the above images by 4
    c) Zoom in (enlarge) by 2
    d) Zoom in by 4
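As a starting point, shrinking can be done with plain subsampling and zooming with pixel replication. This is only one possible approach, assuming repelem is available in your MATLAB version; imresize with the 'nearest' option would work equally well:

```matlab
% Shrink by subsampling, zoom by pixel replication (one possible approach;
% imresize(f, 2, 'nearest') would do the same job using the IPT).
f  = imread('et01.jpg');
s2 = f(1:2:end, 1:2:end);     % shrink by 2: keep every 2nd row and column
s4 = f(1:4:end, 1:4:end);     % shrink by 4
z2 = repelem(f, 2, 2);        % zoom by 2: replicate each pixel into a 2 x 2 block
z4 = repelem(f, 4, 4);        % zoom by 4
```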

Question (3) - Implement Bit-Plane Slicing for Image (1), which is an 8-bit image.  First use the procedure I have given in Quiz (4) and in the worksheet to create 8 images, one for each of the bit planes.  Note that I haven't asked for reconstruction.
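One way to extract the bit planes is sketched below using bitget; this is not necessarily the procedure from Quiz (4) or the worksheet, and the file name im1.jpg is an assumption:

```matlab
% Bit-plane slicing of an 8-bit gray scale image.
f = imread('im1.jpg');            % Image (1); file name is an assumption
for k = 1:8
    bp = bitget(f, k);            % k-th bit of every pixel (0 or 1)
    subplot(2,4,k);
    imshow(logical(bp));          % display the plane as a binary image
    title(sprintf('Bit plane %d', k));
end
```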

Question (4) - Reconstruct the image using the 4th bit plane.

Question (5) - Reconstruct the image using the 7th bit plane.