Lab (2) - Image Processing

Representation of an image as a matrix
Sampling and Quantization
Effect of sampling rate on image quality
Effect of quantization on image quality

In this lab you will learn to represent an image as a matrix of intensities in grey-scale.  You will have a chance to make a beast out of a beauty or a beauty of a beast.

Viewing an image as a matrix
Perhaps the most common image format used to view an image as a matrix is Portable GreyMap (PGM) format.  You can learn more about this image format by using the man command on UNIX.  For the purpose of this lab, I will only refer to the most common 2-D format.  A PGM file contains the following information:

P2  (This is the magic number, marking a plain, human-readable greymap)
nx  ny  (These numbers give the width (number of columns) and height (number of rows), respectively)
Grey_level  (This is the maximum gray level, the intensity value for white. In an 8-bit image, that is 255)
x(i,j)  (These are the intensity values; you can list them row-by-row or all at once, as long as they are separated by white space)

An example of a 4x4, 8-bit image:  sample.pgm
P2
#
4 4
255
12 124 214 255
123 14 200 120
12 8 112 245
112 124 254 255

If you save this in a file with a .pgm extension, you have gotten yourself a 4x4 8-bit image (why 8-bit?) and you can view it using the xv software (type % xv & to get xv's main menu).   Note that an image has NxM pixels, each represented with m bits, giving 2^m grey levels.  Thus, the image size is: b = N x M x m bits.  In the case that M = N, we have b = N^2 m bits.
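To make the size formula concrete, here is a small helper (the function name image_bits is mine, not part of the lab code):

```cpp
// Storage needed by an N x M image with m bits per pixel: b = N * M * m bits.
long image_bits(int N, int M, int m) {
    return (long)N * M * m;
}
// For the 256 x 256, 8-bit images used later in this lab:
//   image_bits(256, 256, 8) = 524288 bits = 65536 bytes
```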

In order to read a PGM image, we will use C++ code.  Matlab does not read PGM files, but we can remove the header and read the remaining part of the image, the intensity values, as a 2-D matrix.  I have prepared a C++ program that helps you read PGM images.  Copy the ReadPGM.C file from my directory using the following command:
% cp ~rt/3531_s01/codes/ReadPGM.C .    (don't forget the .)

This code is not very flexible, a quick and dirty one, but it works.  For images of different sizes, you need to set the image size, nx and ny (ncolumns and nrows), at the top of the code and recompile it every time you change the size.  A better way to write the code is to read nx and ny from the input image, then dynamically create the matrix with the exact size.

This code will ask you for an input image and an output file.  Note that if you are creating a PGM file, then your output has to have a .pgm extension as well.  The current version of the code reads the input image and prints the same image to the output file.  I advise you to keep a clean copy of this code; you will need to read many images in this lab and write them in different ways, and this code is a good place to start.
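ReadPGM.C itself is not reproduced here. As a rough sketch of what such a reader/writer can look like, with nx and ny read from the header (rather than hard-coded) and # comment lines skipped, consider the following; all names in it are my own, not the ones used in ReadPGM.C:

```cpp
#include <fstream>
#include <string>
#include <vector>

// Skip '#' comment lines and return the next integer token.
static int next_int(std::istream& in) {
    std::string tok;
    while (in >> tok) {
        if (tok[0] == '#') {            // rest of this line is a comment
            std::string rest;
            std::getline(in, rest);
            continue;
        }
        return std::stoi(tok);
    }
    return -1;                          // malformed / truncated file
}

// Read a plain (P2) PGM into img; nx = columns, ny = rows.
bool ReadPGM(const std::string& name, std::vector<std::vector<int>>& img, int& maxval) {
    std::ifstream in(name);
    std::string magic;
    if (!(in >> magic) || magic != "P2") return false;
    int nx = next_int(in);
    int ny = next_int(in);
    maxval = next_int(in);
    if (nx <= 0 || ny <= 0) return false;
    img.assign(ny, std::vector<int>(nx));
    for (int i = 0; i < ny; ++i)
        for (int j = 0; j < nx; ++j)
            img[i][j] = next_int(in);
    return true;
}

// Write img back out in the same plain format.
void WritePGM(const std::string& name, const std::vector<std::vector<int>>& img, int maxval) {
    std::ofstream out(name);
    out << "P2\n" << img[0].size() << ' ' << img.size() << '\n' << maxval << '\n';
    for (const auto& row : img) {
        for (int v : row) out << v << ' ';
        out << '\n';
    }
}
```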

To start the lab you need some images.  Copy two sample images from the same directory from which you copied the ReadPGM.C code.  These two images are lena.pgm and sl00.pgm.  The first one is the picture of a model who was famous before you were born (and when I was too young to remember), and the second is a CT spleen image.

[Images: lena.pgm (left) and sl00.pgm (right)]
Sampling
As we discussed in class, an image is denoted by f(x,y), where f represents the intensity and (x,y) the spatial coordinates.  Image sampling is the process of digitizing the spatial coordinates (x,y).  An image is a continuous function of the spatial coordinates; we use a discrete sample of MxN points to represent the image digitally.  The choice of M and N is one of the factors that determine digital image quality: it determines the image resolution.  In this part we want to see the effect of sampling on image quality.

The above two images are both 256x256 and already in digital format.  Since we don't have a way to take the actual scenes and scan them at different sampling rates, for now we assume that the above two images are the originals and we sample them ourselves.  Here is the method we will use to trick ourselves.  If these images were sampled at 1/2 of the existing rate, then we would have half of the 256 pixels on each row and half of the 256 pixels on each column.  In a sense, pixels would become twice as big in each dimension.  Does this give you a clue on how to reduce the sampling rate by 2?  It amounts to having every two neighboring pixels in the image share the same intensity.  To accomplish this, we copy the pixels at even locations onto the pixels at the odd locations, i.e., on each row, the first pixel is copied onto the second, the third onto the fourth, and so forth (and likewise down each column).  This creates an image that looks like the original image sampled at 1/2 the rate.  If you wish to sample at 1/4 the rate, then you would copy the first pixel onto the next three pixels.  By the way, this process is called pixel replication.

You need to write a function that takes f(x,y) and returns the sampled image as f(x,y) again.  You will determine the sample rate by passing an integer to that function as a parameter.  Thus, your function should look like this:

void Sample_spatial(int x[nrows][ncolumns], int factor)

Choose the factor to be a power of two: 2, 4, 8, 16, ... (256 = 2^8), and make sure you don't use an unreasonable sample rate.  You may want to use a file-naming convention for your files.  For example, the lena.pgm image sampled at 1/2 of the original rate may be named lena2.pgm, and so forth.  You may want to create a set of images and then view them using xv.
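The signature above uses a fixed-size array; as a sketch of the replication step, here is the same idea using std::vector instead (assuming factor is a power of two that divides the image size):

```cpp
#include <vector>

// Simulate sampling at 1/factor of the original rate by pixel replication:
// the top-left pixel of each factor x factor block is copied over the whole block.
void Sample_spatial(std::vector<std::vector<int>>& x, int factor) {
    int nrows = x.size();
    int ncolumns = x[0].size();
    for (int i = 0; i < nrows; ++i)
        for (int j = 0; j < ncolumns; ++j)
            x[i][j] = x[(i / factor) * factor][(j / factor) * factor];
}
```

This works in place because the top-left pixel of each block is only ever assigned to itself, so it is never overwritten before the rest of the block reads it.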

Exercise 2.1
Sample the two images at 1/2, 1/4, 1/8, 1/16, 1/32, 1/64, and 1/128 of the original sample rate.  This creates 7 images for each of the above images.  Can you subjectively tell which of these two images was most affected by the reduction in sample rate?

When do you start seeing the checkerboard effect?

Quantization
To represent an image in the digital world, the intensity (amplitude) must be digitized too.  To do this, we break the intensity into discrete levels called gray levels (for a gray-scale image).  Quantization is commonly done in powers of 2; thus, we represent the number of gray levels as a power of two.  A pixel in an 8-bit image (2^8 = 256 gray levels) may have 256 shades of gray, with the value 255 for white and 0 for black.  Any intensity value between these two is a shade of gray.

To see the effect of quantization, once again we use the above images as the originals and quantize them at different quantization levels.  The above images have 256 shades of gray (2^8).  If we quantize these images at 1/2 of that scale, then we would have 128 (2^7) shades of gray.  Thus, in that case the range 0-255 would be mapped to 0-127.  Does this give you a clue on how to do it?

To quantize the above image to 128 gray levels, we map every two shades of gray to one.  Thus, 254 and 255 will be mapped to 127, 252 and 253 will be mapped to 126, ..., and 1 and 0 will be mapped to 0.  Similarly, you can quantize with 64, 32, 16, 8, ... levels.

Your function should look like this:

void Quantize(int x[nrows][ncolumns], int gfactor)
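Again with std::vector as a sketch, assuming gfactor is the desired number of gray levels (a power of two) and the input image has 256 levels:

```cpp
#include <vector>

// Requantize a 256-level image down to gfactor gray levels by mapping each
// group of 256/gfactor consecutive intensities onto one level.
void Quantize(std::vector<std::vector<int>>& x, int gfactor) {
    int step = 256 / gfactor;   // e.g. gfactor = 128 -> step = 2
    for (auto& row : x)
        for (int& v : row)
            v /= step;          // 255, 254 -> 127; 1, 0 -> 0
}
```

After requantizing, remember to write gfactor - 1 as the maximum gray value in the output header, as described below.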

Take a note of your observations for each number of bits per pixel, m:
m = 7
m = 6
m = 5
m = 4
m = 3
m = 2

Note that in this case, you need to change the maximum gray value in the header of the new image file to see the new image correctly.  Thus, when you go from 256 gray levels to 128 gray levels, you change the 4th number in the header from 255 to 127.  The new header will look like this:
P2
#
4 4
127

Same thing for other quantization levels.

Exercise 2.2
Quantize the two original images to 128, 64, 32, 16, 8, 4, and 2 gray levels.  This creates 7 images for each of the above images.  Can you tell which of these two images was most affected (subjectively) by the reduction in quantization levels?

At what quantization level do you start seeing inconsistencies (false contouring)?

Post-Lab - Due Friday September 20
For the post-lab assignment, use either nearest-neighbor interpolation or pixel replication, whichever you find more suitable or easier to use.
Use the concepts introduced in this lab and those in section 2.4.5 of the book to: