CS180 Project 4A: Image Mosaicing

Adnan Aman

Part 1: Capturing Images

I captured several sets of images for this project. The first set was taken at CalHacks in San Francisco, while the others were shot around Berkeley. I made sure to keep significant overlap between adjacent images so they could be registered later.

SF Image 1
San Francisco Scene 1
SF Image 2
San Francisco Scene 2

Part 2: Computing Homographies

To compute the homography matrix H, I set up a system of linear equations from my corresponding point pairs. Each correspondence contributes two equations, one per output coordinate. Since H is a 3x3 matrix with its last element fixed to 1, there are 8 unknowns, so at least four correspondences are required; because I selected more than four points, the system is overdetermined and I solved it with least squares.
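As a concrete illustration, here is a minimal sketch of that least-squares setup, assuming the correspondences are passed as two N x 2 arrays (the function and argument names are placeholders):

    import numpy as np

    def compute_homography(pts_src, pts_dst):
        """Estimate H (with H[2,2] = 1) such that pts_dst ~ H @ pts_src.

        pts_src, pts_dst: (N, 2) arrays of corresponding points, N >= 4.
        Each correspondence (x, y) -> (x', y') satisfies
            x' = (h11*x + h12*y + h13) / (h31*x + h32*y + 1)
            y' = (h21*x + h22*y + h23) / (h31*x + h32*y + 1)
        which rearranges into two linear equations in the 8 unknowns.
        """
        A, b = [], []
        for (x, y), (xp, yp) in zip(pts_src, pts_dst):
            A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
            A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
            b.extend([xp, yp])
        A, b = np.array(A), np.array(b)
        # Overdetermined when N > 4, so solve in the least-squares sense.
        h, *_ = np.linalg.lstsq(A, b, rcond=None)
        return np.append(h, 1.0).reshape(3, 3)

With exactly four points the system is square and the least-squares solve reduces to an exact solve, which is also how the 4-point samples in Part B's RANSAC can be handled.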

Part 3: Image Rectification

To verify my homography computation and warping implementation, I tested the system on simple rectangular objects. These examples show how well the rectification worked to fix perspective distortion:

Original Square
Original Image
Rectified Square
Rectified Image
Original Image
Rectified Image
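To make the rectification step concrete, here is a minimal sketch of how it can be driven: click the four corners of the rectangular object, map them to an axis-aligned rectangle, and inverse-warp with the resulting homography. The corner coordinates and output size below are placeholders, and compute_homography refers to the sketch in Part 2.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def warp_image(img, H, out_shape):
        """Inverse-warp img into a canvas of shape out_shape using homography H
        (H maps input -> output); sample with H^-1 so every output pixel gets a value."""
        H_inv = np.linalg.inv(H)
        ys, xs = np.indices(out_shape[:2])
        ones = np.ones_like(xs)
        # Homogeneous output coordinates, mapped back into the input image.
        src = H_inv @ np.stack([xs.ravel(), ys.ravel(), ones.ravel()])
        src_x, src_y = src[0] / src[2], src[1] / src[2]
        warped = np.zeros(out_shape)
        for c in range(img.shape[2]):
            warped[..., c] = map_coordinates(
                img[..., c], [src_y, src_x], order=1, cval=0.0
            ).reshape(out_shape[:2])
        return warped

    # Rectify: map the photographed corners of a rectangular object to a true square.
    corners_img = np.array([[120, 80], [430, 95], [445, 410], [105, 390]])  # placeholder clicks (x, y)
    corners_rect = np.array([[0, 0], [300, 0], [300, 300], [0, 300]])       # target square
    # H = compute_homography(corners_img, corners_rect)
    # rectified = warp_image(image, H, (300, 300, 3))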

Part 4: Image Mosaicing

For blending the images together, I used a distance transform over each warped image's convex hull to create weights that fade from the center of the image out to its edges. My first attempt used simple rectangular masks for the image bounds, but that produced visible artifacts along the edges; switching to the convex-hull mask gave much cleaner weights. On top of that, I added Laplacian (multiresolution) blending to smooth the remaining seams.
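A minimal sketch of the distance-transform weighting, assuming each warped image comes with a binary mask of its valid (convex-hull) region; the convex-hull construction itself and the Laplacian pyramid step are omitted:

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def blend_two(img1, mask1, img2, mask2):
        """Weighted average of two aligned HxWx3 images, with per-pixel weights given
        by the distance transform of each image's valid-region mask (0 at the edges)."""
        w1 = distance_transform_edt(mask1)
        w2 = distance_transform_edt(mask2)
        total = w1 + w2
        total[total == 0] = 1.0          # avoid divide-by-zero outside both images
        w1, w2 = w1 / total, w2 / total
        return img1 * w1[..., None] + img2 * w2[..., None]

These weights can then serve as the blend mask inside the Laplacian pyramid; the sketch above only shows the single-level weighted average.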

SF Mosaic
San Francisco Mosaic Result
Berkeley Mosaic
Berkeley Mosaic
Additional Mosaic
Additional Berkeley Scene

What I Learned

I learned how to warp images using homography matrices and how to set up the linear equations to solve for H. Seeing rectification remove perspective distortion helped me understand the math better. The distance transform with a convex hull really helped get clean edges when blending the images together.


CS180 Project 4B: Feature Matching for Autostitching

Best Result: Outdoor Scene

This scene shows a big difference between manual and automatic correspondence selection. My hand-picked points weren't very accurate and caused visible alignment issues. The automatic pipeline (Harris corners, feature matching, and RANSAC) found much better correspondences, which made the final mosaic line up way better.

Outdoor Scene 1
Input Image 1
Outdoor Scene 2
Input Image 2
Manual Outdoor Mosaic
Manual Result - Not Great
Automatic Outdoor Mosaic
RANSAC Result - Much Better Alignment

Feature Detection Process

Following the MOPS paper (Brown, Szeliski, and Winder, "Multi-Image Matching using Multi-Scale Oriented Patches"), here's how I found and matched features:

Harris Corner Detection and ANMS

First I found corners using the Harris detector, then applied Adaptive Non-Maximal Suppression (ANMS):
1. For each corner, find the distance to the nearest sufficiently stronger corner (its suppression radius).
2. Keep the 500 corners with the largest radii, which spreads the selected points out across the image.

Before ANMS
Harris Corners Before ANMS
After ANMS
After ANMS - Better Distribution
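A minimal sketch of the ANMS step described above, assuming corners is an (N, 2) array of (x, y) positions and strengths holds their Harris responses; the names are placeholders and the 0.9 robustness factor follows the MOPS paper:

    import numpy as np

    def anms(corners, strengths, n_keep=500, c_robust=0.9):
        """Adaptive Non-Maximal Suppression: keep the n_keep corners with the largest
        suppression radius (distance to the nearest sufficiently stronger corner)."""
        # Pairwise squared distances; O(N^2), fine for a few thousand Harris corners.
        d2 = np.sum((corners[:, None, :] - corners[None, :, :]) ** 2, axis=-1)
        # Corner j suppresses corner i if strengths[i] < c_robust * strengths[j].
        suppresses = strengths[None, :] * c_robust > strengths[:, None]
        d2 = np.where(suppresses, d2, np.inf)
        radii = np.sqrt(d2.min(axis=1))      # inf for the globally strongest corner
        keep = np.argsort(-radii)[:n_keep]
        return corners[keep]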

Feature Matching

For each remaining corner:
1. Extracted a 40x40 patch around the corner and downsampled it to an 8x8 descriptor.
2. Matched descriptors between the two images with nearest neighbors.
3. Kept only the matches where the best match was clearly better than the second best (the ratio test).

Feature Matches
Feature Matches Between Image Pair
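Here is a rough sketch of the descriptor extraction and ratio-test matching, assuming a grayscale image and the corner list from ANMS. The function names, the 0.6 ratio threshold, and the bias/gain normalization are illustrative rather than the exact values in my code, and unlike full MOPS the patch is sampled from the raw image without blurring:

    import numpy as np

    def extract_descriptors(img, corners):
        """8x8 descriptors sampled from 40x40 windows (every 5th pixel), then
        bias/gain normalized; corners too close to the border are skipped."""
        descs, kept = [], []
        for x, y in corners:
            x, y = int(x), int(y)
            if x < 20 or y < 20 or x >= img.shape[1] - 20 or y >= img.shape[0] - 20:
                continue
            patch = img[y - 20:y + 20:5, x - 20:x + 20:5].astype(float)
            patch = (patch - patch.mean()) / (patch.std() + 1e-8)
            descs.append(patch.ravel())
            kept.append((x, y))
        return np.array(descs), np.array(kept)

    def match_ratio_test(desc1, desc2, ratio=0.6):
        """Nearest-neighbor matching with the ratio test: accept a match only when
        the best match is clearly better than the second best."""
        d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=-1)
        order = np.argsort(d, axis=1)
        best, second = order[:, 0], order[:, 1]
        ok = d[np.arange(len(d)), best] < ratio * d[np.arange(len(d)), second]
        return [(i, best[i]) for i in np.where(ok)[0]]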

RANSAC Results

I used RANSAC to select a consistent set of inlier matches and to compute the final homography from them:

RANSAC Points
RANSAC-Selected Points
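A minimal sketch of 4-point RANSAC, reusing the compute_homography routine sketched in Project 4A, Part 2; the iteration count and pixel threshold below are illustrative placeholders:

    import numpy as np

    def ransac_homography(pts1, pts2, n_iters=1000, thresh=2.0):
        """Repeatedly fit H to 4 random correspondences, count inliers within thresh
        pixels, keep the largest inlier set, then refit H on all of its inliers."""
        best_inliers = np.array([], dtype=int)
        pts1_h = np.column_stack([pts1, np.ones(len(pts1))])
        for _ in range(n_iters):
            idx = np.random.choice(len(pts1), 4, replace=False)
            # compute_homography: the least-squares fit sketched in Part 2 of 4A.
            H = compute_homography(pts1[idx], pts2[idx])
            proj = pts1_h @ H.T
            proj = proj[:, :2] / proj[:, 2:3]
            err = np.linalg.norm(proj - pts2, axis=1)
            inliers = np.where(err < thresh)[0]
            if len(inliers) > len(best_inliers):
                best_inliers = inliers
        return compute_homography(pts1[best_inliers], pts2[best_inliers]), best_inliers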

Other Auto-Stitching Results

Indoor Scene

The manual and automatic results were pretty similar here:

Indoor Scene 1
Input Image 1
Indoor Scene 2
Input Image 2
Manual Indoor Mosaic
Manual Result
Automatic Indoor Mosaic
RANSAC Result

Shot Sequence

Another case where both methods worked well:

Shot 1
Input Image 1
Shot 2
Input Image 2
Manual Mosaic
Manual Result
Automatic Mosaic
RANSAC Result

What I Learned

The coolest part was seeing how feature matching could automatically find corresponding points between images. It was pretty amazing that RANSAC could get even better results than manual point selection in some cases. This really showed me how panorama software works under the hood.