Question: In this assignment, you will perform color image reconstruction using Bayer pattern and color reproduction using Dithering.
(I) BasicBayer (25%): In this part we reconstruct this RGB color image (this original color image is only for your reference, you will not use it inside your code) given a corresponding Bayer pattern image (your code takes the Bayer pattern as input). The Bayer pattern used is GRBG. In this part, the task is to reconstruct the green, red, and blue channels from the Bayer pattern image. The reconstruction of the channels should be done using these Green, Red, and Blue image masks, where black marks valid entries and white marks empty entries. These masks together form the Bayer color pattern, viz., GRBG.
You should complete the provided template notebook. Interpolate the empty entries in the reconstructed green, red and blue channels independently of each other. Call the channels IG, IR, and IB, respectively. Next, combine these channels into a full RGB color image and display this image. The output has to visually match this RGB color image.
The details about Bayer filter and the reconstruction process are provided in the presentation Bayer Filter and Demosaicing.pdf. You can also refer to the Wikipedia pages on Bayer filter and demosaicing. In case you are interested in even more details, here's a nice reference: http://www.site.uottawa.ca/~edubois/lslcd/
As a simple scheme of pixel interpolation, you can follow these rules:
- For reconstruction of the green channel IG, note, with respect to the pattern Green, that the green channel value at location B can be interpolated as B = (A+C)/2. Similarly, E = (A+I)/2. For interpolating the green channel value at location G, use G = (F+C+H+K)/4, etc.
- For reconstruction of the red channel IR, with respect to the pattern Red, use these rules of interpolation: C = (B+D)/2, F = (B+J)/2, G = (B+D+L+J)/4, etc. Note that for the red channel here, the first column and the last row are entirely empty. You need to fill them in by copying the second column and the second-to-last row, respectively.
- For reconstruction of the blue channel IB, with respect to the pattern Blue, use these rules: F = (E+G)/2, I = (E+M)/2, J = (E+G+O+M)/4, etc. Notice that here the last column and the first row are entirely empty. So, fill them in by copying the second-to-last column and the second row, respectively. A simple vectorised sketch of this interpolation follows.
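The sketch below uses normalized convolution via scipy.ndimage to implement the same bilinear scheme; the function name and the tiny synthetic test array are illustrative assumptions, not part of the provided template.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_grbg(bayer):
    """Bilinear reconstruction of IR, IG, IB from a GRBG Bayer image (2-D array)."""
    rows, cols = np.indices(bayer.shape)

    # GRBG layout: row 0 = G R G R ..., row 1 = B G B G ...
    mask_g = (rows % 2) == (cols % 2)
    mask_r = (rows % 2 == 0) & (cols % 2 == 1)
    mask_b = (rows % 2 == 1) & (cols % 2 == 0)

    # Bilinear kernels: in the interior these reproduce rules like
    # B = (A+C)/2 and G = (F+C+H+K)/4 from the list above.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float)
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float)

    def interp(mask, kernel):
        # Normalized convolution: average only the valid neighbours, so the
        # empty border rows/columns (e.g. the first red column) are filled
        # from their nearest populated neighbours.
        chan = np.where(mask, bayer.astype(float), 0.0)
        num = convolve(chan, kernel, mode='constant', cval=0.0)
        den = convolve(mask.astype(float), kernel, mode='constant', cval=0.0)
        return num / den

    IG, IR, IB = interp(mask_g, k_g), interp(mask_r, k_rb), interp(mask_b, k_rb)
    return np.dstack([IR, IG, IB])

# Tiny usage example on a synthetic 4x4 "Bayer" array
rgb = demosaic_grbg(np.arange(16, dtype=float).reshape(4, 4))
print(rgb.shape)  # (4, 4, 3)
```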
(II) Floyd-Steinberg dithering (25%):
Make a notebook that reads this RGB image and implements Floyd-Steinberg dithering to change the representation of this image. You should complete the provided template notebook.
The template notebook
- Dynamically calculates an N-colour palette for the given image (in the notebook the variable nColours determines the number of colours in the palette).
- Uses the KMeans clustering algorithm to determine the best colours.
- Makes a kd-tree palette from the provided list of colours.
- Applies the Floyd-Steinberg method to change the representation of the image.
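As a rough illustration of that pipeline, here is a hedged sketch combining a KMeans palette, a kd-tree lookup, and Floyd-Steinberg error diffusion; apart from nColours, the variable names and the input file name are assumptions and may not match the template notebook exactly.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial import cKDTree
from skimage import io, img_as_float

nColours = 8
img = img_as_float(io.imread('input.png'))[:, :, :3]   # placeholder file name

# 1) Dynamically compute an N-colour palette with KMeans on the pixel values.
pixels = img.reshape(-1, 3)
palette = KMeans(n_clusters=nColours, n_init=10,
                 random_state=0).fit(pixels).cluster_centers_

# 2) Build a kd-tree over the palette for fast nearest-colour queries.
tree = cKDTree(palette)

# 3) Floyd-Steinberg: quantise each pixel to its nearest palette colour and
#    push the quantisation error onto the not-yet-visited neighbours.
out = img.copy()
h, w, _ = out.shape
for y in range(h):
    for x in range(w):
        old = out[y, x].copy()
        new = palette[tree.query(old)[1]]
        out[y, x] = new
        err = old - new
        if x + 1 < w:
            out[y, x + 1] += err * 7 / 16
        if y + 1 < h:
            if x > 0:
                out[y + 1, x - 1] += err * 3 / 16
            out[y + 1, x] += err * 5 / 16
            if x + 1 < w:
                out[y + 1, x + 1] += err * 1 / 16
dithered = np.clip(out, 0, 1)
```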
More details about Dithering techniques can be found in the Wikipedia pages on Dither and Palette.
(III) Affine Transformation (25%):
Rotation and scaling are special cases of affine transformations that can be applied to an image. In this part, you will perform image rotation and scaling. You should complete the provided template notebook.
- Read this color image.
- Create a Rotation transformation matrix T_r (90 degrees).
- Create a Scale transformation matrix T_s (by two), as shown below, which scales the placement of the points in all directions.
- Combine the transformations: The neat thing about affine transformations being essentially linear transformations is that you can combine them and apply them in one step. To demonstrate this, apply the dot product (matrix multiplication) of the previous two transformation matrices.
- Apply the combined transformation in one step to the spatial domain of the image data. (Do not use any built-in function to apply the transformation.)
- Plot the image after applying the transformation. It should clearly show that the original image has been rotated 90 degrees clockwise and scaled up 2X. However, the result is visibly degraded, as you can easily see discontinuities in the pixel intensities.
- Develop an implementation of nearest-neighbour interpolation based on backwards mapping: use the inverse of the transformation matrix T to map the pixel coordinates in the transformed image back to the original image and find either the exact match or the nearest neighbour there. (You can use any built-in function for nearest-neighbour interpolation, such as skimage.transform.warp.) A minimal sketch of these steps follows.
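The sketch below assumes a 90-degree clockwise rotation, 2x scaling, and homogeneous 3x3 matrices acting on (row, col, 1) coordinates; the exact matrices pictured in the assignment may differ, and the input file name is a placeholder.

```python
import numpy as np
import matplotlib.pyplot as plt
from skimage import io

img = io.imread('input.png')                 # placeholder file name
h, w = img.shape[:2]

# Rotation by 90 degrees clockwise: (r, c) -> (c, h-1-r); the h-1 offset keeps
# the rotated coordinates non-negative.
T_r = np.array([[ 0, 1, 0],
                [-1, 0, h - 1],
                [ 0, 0, 1]], dtype=float)

# Scaling by two in both directions.
T_s = np.array([[2, 0, 0],
                [0, 2, 0],
                [0, 0, 1]], dtype=float)

# Combined transformation: one matrix product applies both in a single step.
T = T_s @ T_r

# Forward mapping without built-ins: scatter every input pixel to its new
# location. Holes appear because not every output pixel receives a value.
out_shape = (2 * w, 2 * h) + img.shape[2:]
forward = np.zeros(out_shape, dtype=img.dtype)
rr, cc = np.indices((h, w))
src = np.stack([rr.ravel(), cc.ravel(), np.ones(rr.size)])
dst = np.rint(T @ src).astype(int)
forward[dst[0], dst[1]] = img[rr.ravel(), cc.ravel()]

# Backward mapping with nearest-neighbour interpolation: for every output
# pixel, use the inverse of T to look up the closest source pixel.
T_inv = np.linalg.inv(T)
rr2, cc2 = np.indices(out_shape[:2])
qry = np.stack([rr2.ravel(), cc2.ravel(), np.ones(rr2.size)])
back = np.rint(T_inv @ qry).astype(int)
valid = (back[0] >= 0) & (back[0] < h) & (back[1] >= 0) & (back[1] < w)
nearest = np.zeros(out_shape, dtype=img.dtype)
nearest[rr2.ravel()[valid], cc2.ravel()[valid]] = img[back[0][valid], back[1][valid]]

plt.figure(); plt.imshow(forward); plt.title('Forward mapping (holes visible)')
plt.figure(); plt.imshow(nearest); plt.title('Backward mapping, nearest neighbour')
plt.show()
```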
(IV) Use skimage-Python to merge two pictures (25%):
In this exercise, you will need to write code that performs image stitching. You can complete this code or write everything from scratch. Download im1 and im2 for your use. There is a standard procedure for image stitching, which can be briefly summarized in the following steps:
Step 1) Detect local features with BRIEF or ORB in the two images I1 and I2. You may use any other feature extractor, such as SIFT or SURF.
Step 2) Perform feature matching between I1 and I2, based on (approximate) nearest neighbour search, to generate a set of putative matching feature pairs. Usually, the similarity or distance between two feature descriptors is considered as the metric for matching.
Apply further constraints, either geometric or non-geometric, to exclude outliers (wrong matches) and retain inliers (true matches).
Step 3) Compute the homography matrix between I1 and I2 based on the true matches. A homography matrix is a 3-by-3 matrix that defines the transformation between the two images, i.e.,
H12 * [x1, y1, 1]^T = w * [x2, y2, 1]^T
where (x1, y1) and (x2, y2) are the image coordinates of the matching features in the two images, H12 is the homography matrix and w is a scalar.
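As a concrete illustration of this equation (with an arbitrary made-up matrix and point, not taken from the assignment data), dividing the result by w recovers the pixel coordinates in I2:

```python
import numpy as np

H12 = np.array([[1.10, 0.02,  30.0],     # arbitrary example homography
                [0.01, 0.95, -12.0],
                [1e-4, 2e-4,   1.0]])
x1, y1 = 100.0, 50.0                      # arbitrary point in I1
q = H12 @ np.array([x1, y1, 1.0])         # equals w * [x2, y2, 1]
w = q[2]
x2, y2 = q[0] / w, q[1] / w               # normalise by w to get (x2, y2)
print(x2, y2)
```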
Step 4) Perform image stitching based on the homography matrix. Basically, you will need to calculate the new image coordinates for every pixel in I1 with respect to the I2 coordinate system. If H12 is calculated correctly, you should expect that after the transformation the features in I1 have the same coordinates as their correspondences in I2, i.e., they are mapped to the matching places.
The resulting stitched image should look similar to this.
Note that steps 3) and 4) could be correlated. To decide the inliers, we need a model H to see if the matches fit it. On the other hand, to obtain H, we need sufficient true matches to complete the estimation. This is a chicken-and-egg problem. Refer to RANSAC if you are interested.
To calculate the new image coordinates, try to avoid using loops to access every pixel. Matrix operations are usually much faster than loops. To save you time, a reference implementation of stitching with detailed comments is available in the provided code. You can use it in your code (but may need to modify a bit accordingly). You are also welcome to develop your own implementation.
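For reference, here is a hedged sketch of the four steps using skimage (ORB features, brute-force descriptor matching, a RANSAC-fitted homography, and warping). The file names are placeholders, and the provided reference implementation may organise these steps differently, in particular the final blending onto a larger canvas, which is omitted here.

```python
import numpy as np
from skimage import io, color, transform, img_as_float
from skimage.feature import ORB, match_descriptors
from skimage.measure import ransac

I1 = io.imread('im1.jpg')                      # placeholder file names
I2 = io.imread('im2.jpg')
g1, g2 = color.rgb2gray(I1), color.rgb2gray(I2)

# Step 1: ORB keypoints and binary descriptors in both images.
orb = ORB(n_keypoints=1000)
orb.detect_and_extract(g1); k1, d1 = orb.keypoints, orb.descriptors
orb.detect_and_extract(g2); k2, d2 = orb.keypoints, orb.descriptors

# Step 2: putative matches by nearest-neighbour descriptor distance.
matches = match_descriptors(d1, d2, cross_check=True)
src = k1[matches[:, 0]][:, ::-1]               # (x, y) order for the transform
dst = k2[matches[:, 1]][:, ::-1]

# Step 3: robust homography H12 (I1 -> I2) with RANSAC, which also separates
# inliers from outliers (the chicken-and-egg issue noted above).
H12, inliers = ransac((src, dst), transform.ProjectiveTransform,
                      min_samples=4, residual_threshold=2, max_trials=2000)

# Step 4: warp I1 into I2's coordinate frame; features of I1 should now land
# on their correspondences in I2. A full stitcher would additionally compute
# a canvas large enough for both images and blend the overlap.
warped = transform.warp(img_as_float(I1), H12.inverse,
                        output_shape=I2.shape[:2])
```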