ECE 5554 / ECE 4554: Computer Vision Fall 2018

Instructions

  • Due date: 11:55pm on Wednesday, Oct 31st, 2018

  • Handin: through https://canvas.vt.edu

  • Starter code and data: hw4.zip

  • Deliverables:

    • Please prepare your answer sheet using the filename FirstName_LastName_HW4.pdf.

    • Include all the code you implement in each problem. Remember to set relative paths so that we can run your code out of the box.

    • Compress the code and answer sheet into FirstName_LastName_HW4.zip before uploading it to Canvas.

Part 1: SLIC Superpixels (50 points)

 

Overview

Superpixel algorithms group pixels into perceptually meaningful regions while respecting potential object contours, and thereby can replace the rigid pixel grid structure. Due to the reduced complexity, superpixels are becoming popular for various computer vision applications, e.g., multiclass object segmentation, depth estimation, human pose estimation, and object localization.

In this problem, you will implement a simple superpixel algorithm called Simple Linear Iterative Clustering (SLIC) that clusters pixels in the five-dimensional space of color and pixel coordinates (e.g., r, g, b, x, y). The algorithm starts with a collection of K cluster centers initialized on an equally spaced regular grid over the image of N pixels. For each cluster, you define a localized 2S x 2S window centered at the cluster center, where S = sqrt(N/K) is roughly the spacing between the seed cluster centers. Then, for each pixel within the 2S x 2S local window, you check whether it should be assigned to that cluster by comparing its distance to the cluster center in the 5D space. Once you have looped through all the clusters, you update each cluster center by averaging over its members. Iterate the pixel-to-cluster assignment and center-update process until convergence or until a maximum number of iterations is reached.

For more details, please see SLIC Superpixels Compared to State-of-the-art Superpixel Methods, PAMI 2012.
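
To make the procedure concrete, here is a minimal MATLAB sketch of the assignment/update loop. It assumes img is an H x W x 3 double image, K is the desired number of superpixels, and m is a spatial-vs-color weight; these names, the fixed iteration count, and the RGB feature choice are assumptions for illustration, not part of the starter code.

    % Minimal SLIC sketch: 5D features [r g b x y], weighted distance, local 2S x 2S search.
    [H, W, ~] = size(img);
    N = H * W;
    S = round(sqrt(N / K));                               % spacing between seed centers

    % Initialize cluster centers [r g b x y] on a regular grid.
    [cx, cy] = meshgrid(round(S/2):S:W, round(S/2):S:H);
    centers = zeros(numel(cx), 5);
    for k = 1:numel(cx)
        centers(k, :) = [squeeze(img(cy(k), cx(k), :))' cx(k) cy(k)];
    end

    labels = zeros(H, W);
    R = img(:,:,1); G = img(:,:,2); B = img(:,:,3);
    for iter = 1:10                                       % fixed iteration count for simplicity
        dist = inf(H, W);
        for k = 1:size(centers, 1)
            % Restrict the search to a 2S x 2S window around the current center.
            x0 = max(1, round(centers(k,4)) - S);  x1 = min(W, round(centers(k,4)) + S);
            y0 = max(1, round(centers(k,5)) - S);  y1 = min(H, round(centers(k,5)) + S);
            [xs, ys] = meshgrid(x0:x1, y0:y1);
            patch = reshape(img(y0:y1, x0:x1, :), [], 3);
            dc2 = sum((patch - centers(k,1:3)).^2, 2);                    % color distance^2
            ds2 = (xs(:) - centers(k,4)).^2 + (ys(:) - centers(k,5)).^2;  % spatial distance^2
            d = sqrt(dc2 + (m / S)^2 * ds2);                              % weighted 5D distance
            idx = sub2ind([H W], ys(:), xs(:));
            better = d < dist(idx);
            dist(idx(better)) = d(better);
            labels(idx(better)) = k;
        end
        % Update each center as the mean 5D feature of its members.
        for k = 1:size(centers, 1)
            mask = (labels == k);
            if any(mask(:))
                [yy, xx] = find(mask);
                centers(k, :) = [mean(R(mask)) mean(G(mask)) mean(B(mask)) mean(xx) mean(yy)];
            end
        end
    end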

Write-up

  • (5 points) (a) Explain your distance function for measuring the similarity between a pixel and a cluster center in the 5D space.

  • (5 points) (b) Choose one image, try three different weightings of the color and spatial features, and show the three segmentation results. Describe what you observe.

  • (5 points) (c) Choose one image and show the error map (1) at initialization and (2) at convergence. Note: error = distance to the assigned cluster center in the 5D space.

  • (10 points) (d) Choose one image, show three superpixel results with different numbers of clusters K, e.g., 64, 256, 1024, and report the run time for each K (use tic and toc).

  • (25 points) (e) Run your algorithm on the subset (50 images) of the Berkeley Segmentation Dataset (BSD) and report two types of metrics, (1) boundary recall and (2) under-segmentation error, with K = 64, 256, and 1024. Also report the average run time per image over the BSD subset. (A minimal sketch of both metrics appears after this list.)
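
For part (e), the sketch below shows one way the two metrics could be computed for a single image. gtLabels and spLabels (same-size ground-truth and superpixel label maps), the 2-pixel boundary tolerance, and the "min in/out" variant of under-segmentation error are assumptions; other definitions exist, so follow the one in the SLIC paper if you prefer.

    % Boundary recall: fraction of ground-truth boundary pixels lying within a
    % small tolerance (here 2 pixels) of some superpixel boundary.
    gtB = boundarymask(gtLabels);                     % ground-truth boundary map
    spB = imdilate(boundarymask(spLabels), strel('disk', 2));
    boundaryRecall = nnz(gtB & spB) / nnz(gtB);

    % Under-segmentation error ("min in/out" variant): for each ground-truth
    % segment, every overlapping superpixel is penalized by the smaller of its
    % area inside or outside the segment, normalized by the number of pixels.
    ue = 0;
    for s = unique(gtLabels(:))'
        inS = (gtLabels == s);
        for p = unique(spLabels(inS))'
            inP = (spLabels == p);
            overlap = nnz(inP & inS);
            ue = ue + min(overlap, nnz(inP) - overlap);
        end
    end
    underSegError = ue / numel(gtLabels);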

Part 2: Graph-cut Segmentation (50 points)

Graph Cut 

Overview

Let us apply Graph-cuts for foreground/background segmentation. In the “cat” image, you are given a rough polygon of a foreground cat. Apply Graph-cuts to get a better segmentation.

First, you need an energy function. Your energy function should include a unary term, a data-independent smoothing term, and a contrast-sensitive smoothing term.

Your pairwise term should include the uniform smoothing and contrast-sensitive terms. To construct the unary term, use the provided polygon to obtain an estimate of the foreground and background color likelihoods. You may choose the likelihood distribution (e.g., color histograms or a mixture of Gaussians in color space).

Apply graph-cut code for the segmentation. You must define the graph structure and the unary and pairwise terms, and you may use the provided graph-cut code or another package of your choice.
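
One common form of the energy (an assumption, not a requirement) is E(L) = sum_p U_p(l_p) + lambda1 * sum_{(p,q)} [l_p != l_q] + lambda2 * sum_{(p,q)} exp(-||I_p - I_q||^2 / (2*sigma^2)) [l_p != l_q], where the pairwise sums run over 4-connected neighbor pairs. The sketch below builds the unary costs and edge weights in this form; fgProb (a per-pixel foreground probability map), lambda1, lambda2, sigma, and the solver name my_graphcut_solver are assumed placeholders, and the final call depends on the interface of the graph-cut package you use.

    % Unary term: negative log-likelihood of each label.
    % fgProb is an H x W map of P(foreground | pixel) (assumed input).
    eps0 = 1e-6;
    unaryFg = -log(fgProb + eps0);                    % cost of labeling a pixel foreground
    unaryBg = -log(1 - fgProb + eps0);                % cost of labeling it background
    unary = [unaryBg(:)'; unaryFg(:)'];               % 2 x N, one column per pixel

    % Pairwise term over 4-connected neighbors: uniform smoothing plus a
    % contrast-sensitive weight that shrinks across strong color edges.
    [H, W, ~] = size(img);                            % img assumed double in [0, 1]
    idx = reshape(1:H*W, H, W);
    edgesH = [reshape(idx(:, 1:end-1), [], 1), reshape(idx(:, 2:end), [], 1)];
    edgesV = [reshape(idx(1:end-1, :), [], 1), reshape(idx(2:end, :), [], 1)];
    edges  = [edgesH; edgesV];
    I = reshape(img, [], 3);
    diff2 = sum((I(edges(:,1), :) - I(edges(:,2), :)).^2, 2);
    weights = lambda1 + lambda2 * exp(-diff2 / (2 * sigma^2));

    % Hand the unary costs, edge list, and edge weights to the provided
    % graph-cut solver (placeholder: the actual interface depends on the package).
    % labels = my_graphcut_solver(unary, edges, weights);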

Write-up

  • (5 points) (a) Explain your foreground and background likelihood function.

  • (10 points) (b) Write the unary and pairwise terms, as well as the whole energy function, in math expressions.

  • (15 points) (c) Your foreground and background likelihood maps. Display P(foreground | pixel) as an intensity map (bright = confident foreground).

  • (20 points) (d) Final segmentation. Create an image for which the background pixels are blue, and the foreground pixels have the color of the input image.

Hint

  • You can use MATLAB GMM functions for estimating the foreground and background distributions.
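
For instance, with the Statistics and Machine Learning Toolbox the models could be fit as below; fgPixels and bgPixels (N x 3 color samples gathered from inside and outside the provided polygon), allPixels, and the 5-component choice are assumptions for illustration.

    % Fit 5-component GMMs to foreground/background color samples; the small
    % regularization value keeps covariance estimates from becoming singular.
    opts = statset('MaxIter', 500);
    fgGmm = fitgmdist(fgPixels, 5, 'RegularizationValue', 1e-4, 'Options', opts);
    bgGmm = fitgmdist(bgPixels, 5, 'RegularizationValue', 1e-4, 'Options', opts);

    % Per-pixel foreground probability for an N x 3 list of pixel colors.
    pFg = pdf(fgGmm, allPixels);
    pBg = pdf(bgGmm, allPixels);
    fgProb = pFg ./ (pFg + pBg + eps);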

Graduate credit (max possible 30 points extra credit)

In your answer sheet, describe the extra points under a separate heading.

Modern interactive segmentation

  • up to 15 points: Use the PolyRNN tool and annotate a small collection of images. (More details on Piazza).

SLIC superpixel

  • up to 15 points: Try to improve your result from part 1-(e). You may try a different color space (e.g., CIELab, HSV) (see Sec. 4.5 in the paper), richer image features (e.g., gradients), or any other ideas you come up with. Report the accuracy in terms of boundary recall and under-segmentation error with K = 256. Compare the results with part 1-(e) and explain why you get better results. (See the snippet below for a color-space conversion example.)
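
If you try CIELab, MATLAB's rgb2lab (Image Processing Toolbox) performs the conversion; the rescaling below is an assumed convenience so that all channels have comparable ranges before clustering.

    % Convert to CIELab and rescale the channels to roughly [0, 1].
    lab = rgb2lab(img);                               % L in [0, 100], a/b roughly in [-128, 127]
    lab(:,:,1) = lab(:,:,1) / 100;
    lab(:,:,2:3) = (lab(:,:,2:3) + 128) / 255;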

Graph-cut Segmentation

  • up to 15 points: Try your Graph-cut implementation on two more images. Specify the foreground object of interest with a bounding box and use Graph-cut to refine the boundary. Show us (1) the input image, (2) the initial bounding box, and (3) the segmentation result.

 
  • up to 10 points: Use your Graph-cut to segment out an object in one image. Then, copy and paste the segmentation onto another image. See here for some ideas.

 

Acknowledgements

This homework is adapted from projects developed by Derek Hoiem (UIUC).