
CS180: Project 2 - Fun with Filters and Frequencies!

Adnan Aman

Project Overview

In this project, we explore methods of algorithmic image filtering. We detect edges using derivatives and gradients, apply high-pass and low-pass filters, and blend images together using Laplacian stacks.

Part 1: Fun with Filters

1.1 Finite Difference Operator

We take the x and y partial derivatives of an image by convolving it with finite difference operators. Here are the cameraman image and its convolutions:

Original
Partial Derivative in x
Partial Derivative in y
Gradient Magnitude (Edges)

Gradient magnitude computation is a standard technique for edge detection. The process involves the following steps (sketched in code after the list):

  1. Compute the partial derivatives of the image in both x and y directions using finite difference operators.
  2. For each pixel, calculate the magnitude of the gradient vector using the formula: magnitude = sqrt(dx^2 + dy^2).
  3. The resulting gradient magnitude image highlights areas of change, effectively detecting edges in the original image.
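
A minimal sketch of these steps, assuming a grayscale float image and SciPy's convolve2d; the finite difference operators are the standard D_x = [1, -1] and D_y = [1, -1]^T, and the threshold value is illustrative rather than the exact one used for the results above:

    import numpy as np
    from scipy.signal import convolve2d
    from skimage import data, img_as_float

    # Finite difference operators.
    D_x = np.array([[1.0, -1.0]])    # horizontal differences
    D_y = np.array([[1.0], [-1.0]])  # vertical differences

    im = img_as_float(data.camera())  # cameraman image, values in [0, 1]

    dx = convolve2d(im, D_x, mode="same", boundary="symm")  # partial derivative in x
    dy = convolve2d(im, D_y, mode="same", boundary="symm")  # partial derivative in y

    magnitude = np.sqrt(dx**2 + dy**2)  # per-pixel gradient magnitude
    edges = magnitude > 0.25            # threshold tuned by eye (illustrative value)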

1.2 Derivative of Gaussian (DoG) Filter

To reduce noise, we apply a Gaussian filter before the derivative convolution. Here are the results:

Gaussian Blur
DoG in x
DoG in y
Smoothed Gradient Magnitude (Edges)

When using the Derivative of Gaussian (DoG) filter (answering Q1.2):

  1. A single convolution with the DoG filters achieves the same results as the two-step process (Gaussian blur followed by gradient computation).
  2. The edges appear smoother and more continuous compared to the simple gradient method.
  3. Without the Gaussian filter, the finite difference operator produces very noisy edges, since it responds to every small intensity change along x and y.
  4. With the DoG filter, there is much less apparent noise in the final edge-detection result.
  5. Because the blur and the derivative are combined into one filter, we achieve these results with fewer convolutions, making the process more computationally efficient (see the sketch below).
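
Because convolution is associative, the Gaussian can be convolved with the difference operators once, and the resulting DoG filters replace the blur-then-differentiate pipeline. A sketch, assuming OpenCV and SciPy; the kernel size and sigma are illustrative choices, not the exact parameters used above:

    import cv2
    import numpy as np
    from scipy.signal import convolve2d

    # 2D Gaussian as the outer product of 1D kernels (size/sigma are assumptions).
    g1d = cv2.getGaussianKernel(ksize=9, sigma=1.5)
    G = g1d @ g1d.T

    # Convolve the Gaussian with the difference operators once to get DoG filters.
    D_x = np.array([[1.0, -1.0]])
    D_y = np.array([[1.0], [-1.0]])
    DoG_x = convolve2d(G, D_x)  # 'full' mode keeps the whole filter response
    DoG_y = convolve2d(G, D_y)

    # A single convolution with each DoG filter now matches (up to boundary effects):
    #   convolve2d(convolve2d(im, G, mode="same"), D_x, mode="same")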

Part 2: Fun with Frequencies!

2.1 Image Sharpening

We sharpen images by enhancing their high-frequency components, a technique known as unsharp masking. We subtract a Gaussian-blurred version of the image from the original to isolate the high frequencies, then add them back with varying intensity: sharpened = original + alpha * (original - blurred).
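
A minimal unsharp-masking sketch, assuming grayscale float images in [0, 1]; the kernel size and sigma are assumptions to tune:

    import cv2
    import numpy as np

    def unsharp_mask(im, alpha, ksize=9, sigma=2.0):
        """Sharpen by adding back scaled high frequencies."""
        blurred = cv2.GaussianBlur(im, (ksize, ksize), sigma)  # low frequencies
        high_freq = im - blurred                               # detail removed by the blur
        return np.clip(im + alpha * high_freq, 0.0, 1.0)       # alpha scales the boost

The same operation can be folded into a single convolution with the kernel (1 + alpha) * delta - alpha * gaussian, the so-called unsharp mask filter.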

Taj Mahal Image Sharpening

We start with a sharp image (Taj Mahal) and apply different levels of sharpening to observe the effects.

Original Image
Sharpened (Alpha: 0.25)
Sharpened (Alpha: 1)
Sharpened (Alpha: 4)

San Francisco Street Image Sharpening

Next, we take a potentially blurry image of a San Francisco street and apply our sharpening technique.

Original Image
Blurred Image
Sharpened (Alpha: 2)

New York Street Image Sharpening

Finally, we apply our sharpening technique to an image of a New York street.

Original Image
Blurred Image
Sharpened (Alpha: 2)

Observations

1. Taj Mahal Image: As we increase the alpha value, we can see that the details in the image become more pronounced. The edges of the building and the textures in the sky become sharper. However, at higher alpha values (like 4), we start to see some artifacts and over-sharpening effects.

2. San Francisco and New York Street Images: For these images, we first applied a blur and then sharpened them. We can observe that sharpening restores some apparent crispness, but the result does not match the original: the high-frequency detail removed by the blur cannot be fully recovered.

In conclusion, while image sharpening can enhance details and make images appear clearer, it is most effective when applied to images that are already reasonably sharp. When applied to blurred images, it can improve perceived clarity but cannot fully restore the original detail. Additionally, over-sharpening (using too high an alpha value) can introduce unwanted artifacts and make the image appear unnatural.

2.2 Hybrid Images

We create hybrid images that appear different when viewed up close versus from afar. The process combines the high-frequency components of one image with the low-frequency components of another.
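
A minimal sketch of the construction, assuming aligned grayscale float images of the same shape; the two cutoff sigmas are assumptions that must be tuned per image pair:

    import cv2
    import numpy as np

    def hybrid(im_low, im_high, sigma_low=8.0, sigma_high=4.0):
        """Low frequencies of im_low plus high frequencies of im_high."""
        low = cv2.GaussianBlur(im_low, (0, 0), sigma_low)               # low-pass
        high = im_high - cv2.GaussianBlur(im_high, (0, 0), sigma_high)  # high-pass
        return np.clip(low + high, 0.0, 1.0)

Passing (0, 0) as the kernel size lets OpenCV derive it from sigma.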

Cat
Dog
Hybrid Image
Cat (High Frequency)
Dog (Low Frequency)

Frequency Maps

To better understand how hybrid images work, we can visualize the frequency content of each image using Fourier transforms:

Cat Frequency Map
Dog Frequency Map
Hybrid Image Frequency Map

High and Low Pass Filters

To create the hybrid image, we apply a high-pass filter to the cat image and a low-pass filter to the dog image:

High-pass Filtered Cat
Low-pass Filtered Dog

More fun examples

Let's explore more hybrid image examples using familiar faces!

Josh Hug and John DeNero

Josh Hug (Using High Frequency)
John DeNero (Using Low Frequency)
Hybrid Image
Josh Hug High-pass Filter
John DeNero Low-pass Filter
Hybrid Image Frequency Map

Derek and Nutmeg

Nutmeg (Using High Frequency)
Derek (Using Low Frequency)
Hybrid Image
Nutmeg High-pass Filter
Derek Low-pass Filter
Hybrid Image Frequency Map

These additional examples further illustrate how hybrid images combine high-frequency details from one image with low-frequency components from another.

In each case, the hybrid image is created by summing the two filtered images. When viewed up close, the high-frequency details (e.g., the cat) are more visible; from a distance, the low-frequency components (e.g., the dog) dominate perception.

2.3 Gaussian and Laplacian Stacks

In this section, we build Gaussian and Laplacian stacks for both an apple and an orange image. A Gaussian stack repeatedly blurs the image without downsampling; each Laplacian level is the difference between adjacent Gaussian levels, with the final (blurriest) Gaussian level kept as the last layer. These stacks are crucial for the multiresolution blending process.
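
A compact sketch of both stacks, assuming grayscale float images; five levels match the figures below, while the sigma is an illustrative choice:

    import cv2

    def gaussian_stack(im, levels=5, sigma=2.0):
        """Repeatedly blur without downsampling (a stack, not a pyramid)."""
        stack = [im]
        for _ in range(levels - 1):
            stack.append(cv2.GaussianBlur(stack[-1], (0, 0), sigma))
        return stack

    def laplacian_stack(im, levels=5, sigma=2.0):
        """Each level holds the detail between adjacent Gaussian levels."""
        g = gaussian_stack(im, levels, sigma)
        # Keep the blurriest Gaussian level last so the stack sums back to the image.
        return [g[i] - g[i + 1] for i in range(levels - 1)] + [g[-1]]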

Apple Gaussian Stack

Levels 0–4

Apple Laplacian Stack

Levels 0–4

Orange Gaussian Stack

Levels 0–4

Orange Laplacian Stack

Levels 0–4

These Gaussian and Laplacian stacks form the basis for our multiresolution blending process, which we'll explore in the next section.

2.4 Multiresolution Blending

In this section, we demonstrate the multiresolution blending process using the apple and orange images. We'll show the masked Laplacian layers and the final blended result.
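
A sketch of the blending loop, reusing the gaussian_stack and laplacian_stack helpers sketched in Section 2.3; the mask is a float array that is 1 where the first image should appear, and all parameters are assumptions to tune:

    import numpy as np

    def blend(im1, im2, mask, levels=5, sigma=2.0, mask_sigma=8.0):
        """Combine each masked Laplacian layer, then sum the layers."""
        la = laplacian_stack(im1, levels, sigma)
        lb = laplacian_stack(im2, levels, sigma)
        gm = gaussian_stack(mask.astype(np.float64), levels, mask_sigma)

        out = np.zeros_like(im1)
        for l1, l2, m in zip(la, lb, gm):
            out += m * l1 + (1.0 - m) * l2  # masked Laplacian layer
        return np.clip(out, 0.0, 1.0)

Blurring the mask (mask_sigma) widens the transition band, which is what hides the seam between the two images.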

Masked Laplacian Layers

Layers 0–4

Final Blended Result

The "Oraple" - A seamless blend of an orange and an apple

The final blended image, often humorously referred to as the "Oraple", demonstrates the power of multiresolution blending. By combining the Laplacian stacks of the apple and orange images using a carefully crafted mask, we achieve a seamless transition between the two fruits. This technique allows us to create convincing composite images that smoothly blend different elements together.

Some more examples!

San Francisco Skyline and Space Blend (Irregular Mask)

San Francisco Skyline
Space Image
Irregular Mask
San Francisco and Space Blend

This example demonstrates the use of an irregular mask to blend the San Francisco skyline with a space scene.

Sun and Moon Blend

Sun Image
Moon Image
Sun and Moon Blend

Blending two images with such drastically different textures and colors demonstrates the power of multiresolution blending.

Conclusion

This project has demonstrated how filtering and frequency-based techniques can be used to detect edges, enhance images, create intriguing visual effects, and seamlessly combine different visual elements.