Examples for Assignment 1

Random Noise

void R2Image::AddNoise(double magnitude);
Adds random noise to an image. The amount of noise is controlled by magnitude in the range [0.0, 1.0]: 0.0 adds no noise, and 1.0 adds a lot of noise.
 
Examples, left to right: magnitude = 0.0, 0.25, 0.5, 0.75, 1.0.
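The per-pixel operation can be sketched as follows. This is a minimal illustration, not the actual R2Image code; it assumes channel values are doubles in [0, 1] and uses uniform noise (the helper name is invented for the example):

```cpp
#include <algorithm>
#include <cstdlib>

// Perturb a single channel value in [0, 1] by uniform random noise
// scaled by magnitude, then clamp back into the valid range.
double AddNoiseToValue(double value, double magnitude) {
    double r = 2.0 * (std::rand() / (double)RAND_MAX) - 1.0;  // in [-1, 1]
    return std::min(1.0, std::max(0.0, value + magnitude * r));
}
```

With magnitude 0.0 the value is unchanged; larger magnitudes move each channel further from its original value.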

Brightness

void R2Image::Brighten(double factor);
Changes the brightness of an image by scaling the original colors by a factor and clipping the resulting values to [0, 1] (factor = 0.0: all black; factor = 1.0: original; factor > 1.0: brighter). See Graphica Obscura.
 
Examples, left to right: factor = 0.0, 0.5, 1.0, 1.5, 2.0.
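The scale-and-clip step described above can be sketched per channel (helper name illustrative, channel values assumed to be doubles in [0, 1]):

```cpp
#include <algorithm>

// Brighten one channel value: scale by factor, then clip to [0, 1].
// The assignment applies this to every channel of every pixel.
double BrightenValue(double value, double factor) {
    return std::min(1.0, std::max(0.0, value * factor));
}
```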

Contrast

void R2Image::ChangeContrast(double factor);
Changes the contrast of an image by interpolating between a constant gray image (factor = 0) with the average luminance and the original image (factor = 1). Interpolation reduces contrast, extrapolation boosts contrast, and negative factors generate inverted images. See Graphica Obscura.
 
Examples, left to right: factor = -0.5, 0.0, 0.5, 1.0, 1.7.
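The interpolation/extrapolation above can be sketched per channel. This is an illustration, not the R2Image code; mean_luminance would be the average over all pixels of a luminance such as 0.30*r + 0.59*g + 0.11*b (those coefficients are one common choice):

```cpp
#include <algorithm>

// Interpolate (0 < factor < 1) or extrapolate (factor > 1, or < 0)
// between a constant gray image at the mean luminance (factor = 0)
// and the original value (factor = 1), then clip.
double ChangeContrastValue(double value, double mean_luminance, double factor) {
    double v = mean_luminance + factor * (value - mean_luminance);
    return std::min(1.0, std::max(0.0, v));
}
```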

Saturation

void R2Image::ChangeSaturation(double factor);
Changes the saturation of an image by interpolating between a gray level version of the image (factor = 0) and the original image (factor = 1). Interpolation decreases saturation, extrapolation increases it, and negative factors preserve luminance but invert the hue of the input image. See Graphica Obscura.
 
Examples, left to right: factor = -1.0, 0.0, 0.5, 1.0, 2.5.
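A sketch of the per-pixel saturation change, using the same interpolate/extrapolate pattern but against each pixel's own gray value (struct and helper names are invented for the example; the luminance coefficients are one common choice):

```cpp
#include <algorithm>

struct Rgb { double r, g, b; };

// Interpolate/extrapolate between the pixel's gray (luminance) value
// (factor = 0) and the original color (factor = 1), clipping each channel.
Rgb ChangeSaturationPixel(Rgb c, double factor) {
    double gray = 0.30 * c.r + 0.59 * c.g + 0.11 * c.b;
    auto mix = [&](double v) {
        return std::min(1.0, std::max(0.0, gray + factor * (v - gray)));
    };
    return Rgb{mix(c.r), mix(c.g), mix(c.b)};
}
```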

White Balance

void R2Image::WhiteBalance(double red, double green, double blue);
Adjust the white balance of the scene to compensate for lighting that is too warm, too cool, or tinted, to produce a neutral image. Use Von Kries' method: convert the image from RGB to the LMS color space (there are several slightly different versions of this space, use any reasonable one, e.g. RLAB), divide by the LMS coordinates of the white point color (the estimated tint of the illumination), and convert back to RGB. (Image source)
 

Examples: before correction (too warm); after correction (neutral).
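The core Von Kries idea is a per-channel division by the illuminant color so that the illuminant maps to neutral. The crude sketch below performs that division directly in RGB; the assignment asks for it in LMS space (convert RGB to LMS, divide, convert back), which behaves better perceptually. Names are illustrative:

```cpp
#include <algorithm>

struct Rgb { double r, g, b; };

// Divide each channel by the corresponding coordinate of the estimated
// illuminant (white point) color, so the illuminant becomes (1, 1, 1).
// Doing this in RGB is only an approximation of the LMS-space method.
Rgb WhiteBalanceRgb(Rgb c, Rgb white) {
    auto clip = [](double v) { return std::min(1.0, std::max(0.0, v)); };
    return Rgb{clip(c.r / white.r), clip(c.g / white.g), clip(c.b / white.b)};
}
```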

Histogram Equalization

void R2Image::EqualizeHistograms();
Increase the contrast of an image by "equalizing" its histogram, that is, remapping the pixel intensities so that the final histogram is flat. A low contrast image usually clumps most pixels into a few tight clusters of intensities. Histogram equalization redistributes the pixel intensities uniformly over the full range of intensities [0, 1], while maintaining the relationship between light and dark areas of the image. For a color image, convert to HSV and equalize the value channel. (UCI course notes, Wikipedia, image source)
 

Examples: before and after (grayscale); before and after (color).
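The remapping step can be sketched for a single channel: build a histogram, accumulate it into a cumulative distribution, and map each value through the normalized CDF. The bin count and function name are assumptions for the example:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Equalize one channel of values in [0, 1] via its normalized CDF.
std::vector<double> EqualizeChannel(const std::vector<double>& values,
                                    int nbins = 256) {
    std::vector<double> hist(nbins, 0.0);
    for (double v : values) {
        int bin = std::min(nbins - 1, (int)(v * nbins));
        hist[bin] += 1.0;
    }
    std::vector<double> cdf(nbins, 0.0);
    double total = 0.0;
    for (int i = 0; i < nbins; i++) { total += hist[i]; cdf[i] = total; }
    std::vector<double> out(values.size());
    for (std::size_t i = 0; i < values.size(); i++) {
        int bin = std::min(nbins - 1, (int)(values[i] * nbins));
        out[i] = cdf[bin] / total;  // remapped intensity in (0, 1]
    }
    return out;
}
```

Because the CDF is monotone, light/dark ordering between pixels is preserved while clumped intensities get spread over the full range.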

Vignette

void R2Image::Vignette(double inner_radius, double outer_radius);
Darken the corners of the image, as observed when using lenses with very wide apertures (ref). The image should be perfectly clear up to inner_radius, perfectly dark (black) at outer_radius and beyond, and the darkness should increase smoothly in the circular ring between. Both radii are specified as multiples of half the length of the image diagonal (so 1.0 is the distance from the image center to a corner).

Note: the vignetting ring should be a perfect circle, not an ellipse. Camera lenses typically have circular apertures, even if the sensor/film is rectangular.
 

Examples: original; inner = 0.707, outer = 1.414; inner = 0.5, outer = 1.0.
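The attenuation weight for one pixel can be sketched as below (each pixel's color is multiplied by this weight). The linear ramp between the radii is an assumption; any smooth falloff is plausible:

```cpp
#include <algorithm>
#include <cmath>

// Vignette weight for pixel (x, y). Radii are multiples of half the
// image diagonal, so r = 1.0 at the corners.
double VignetteWeight(int x, int y, int width, int height,
                      double inner_radius, double outer_radius) {
    double cx = 0.5 * width, cy = 0.5 * height;
    double half_diag = 0.5 * std::sqrt((double)width * width +
                                       (double)height * height);
    double r = std::sqrt((x - cx) * (x - cx) + (y - cy) * (y - cy)) / half_diag;
    if (r <= inner_radius) return 1.0;  // fully clear
    if (r >= outer_radius) return 0.0;  // fully dark
    return (outer_radius - r) / (outer_radius - inner_radius);
}
```

Note the distance is computed in pixels in both axes, so the ring stays a circle regardless of the image's aspect ratio.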

Extract Channel

void R2Image::ExtractChannel(int channel);
Extracts a channel of an image: leaves the specified channel intact and sets all other channels to zero.
 

Examples: original; red; green; blue.

Quantization

void R2Image::Quantize(int nbits);
Converts an image to nbits bits per channel using uniform quantization.

The number of output levels per channel is L = 2^nbits, evenly spaced so that the lowest level is 0.0 and the highest is 1.0. Each input value is mapped to the closest available output level.
 

Examples, left to right: nbits = 1, 2, 3, 4, 5.
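The mapping to the nearest of the L = 2^nbits levels can be sketched per channel (helper name illustrative):

```cpp
#include <cmath>

// Uniformly quantize a channel value in [0, 1] to 2^nbits evenly
// spaced levels, rounding to the nearest level.
double QuantizeValue(double value, int nbits) {
    int levels = 1 << nbits;                            // L = 2^nbits
    double q = std::floor(value * (levels - 1) + 0.5);  // nearest level index
    return q / (levels - 1);
}
```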

Random Dither

void R2Image::RandomDither(int nbits);
Converts an image to nbits bits per channel using random dithering. It is similar to uniform quantization, but random noise is added to each component during quantization, so that the arithmetic mean of many output pixels with the same input level will be equal to this input level.
 
Examples, left to right: nbits = 1, 2, 3, 4, 5.
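One way to realize the description above is to add noise of up to half a quantization step before rounding, so the rounding direction becomes random in proportion to the fractional part (sketch; helper name illustrative):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdlib>

// Uniform quantization with noise in [-0.5, 0.5), in units of one
// quantization step, added before rounding; the expected output over
// many pixels then equals the input level.
double RandomDitherValue(double value, int nbits) {
    int levels = 1 << nbits;
    double noise = (std::rand() / (double)RAND_MAX) - 0.5;
    double q = std::floor(value * (levels - 1) + 0.5 + noise);
    q = std::min((double)(levels - 1), std::max(0.0, q));
    return q / (levels - 1);
}
```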

Ordered Dither

void R2Image::OrderedDither(int nbits);
Converts an image to nbits bits per channel using ordered dithering. The following examples used the pattern
Bayer4 = | 15  7 13  5 |
         |  3 11  1  9 |
         | 12  4 14  6 |
         |  0  8  2 10 |

The values can be used as thresholds for rounding up or down as described in the lecture slides. There is an alternative way of using them: for each pixel at (x, y), compute i = x % 4 and j = y % 4, and add (Bayer4[i][j] + 1) / 17.0 to the scaled value before rounding down (i.e. taking the floor). The two approaches can be shown to be essentially equivalent.
 
Examples, left to right: nbits = 1, 2, 3, 4, 5.

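The add-then-floor variant can be sketched for one channel value (function name illustrative; the matrix is the Bayer4 pattern from the text):

```cpp
#include <cmath>

static const int Bayer4[4][4] = {
    {15,  7, 13,  5},
    { 3, 11,  1,  9},
    {12,  4, 14,  6},
    { 0,  8,  2, 10},
};

// Ordered dither of a channel value in [0, 1] at pixel (x, y):
// scale to level units, add the position-dependent offset
// (Bayer4[i][j] + 1) / 17, and take the floor.
double OrderedDitherValue(double value, int nbits, int x, int y) {
    int levels = 1 << nbits;
    int i = x % 4, j = y % 4;
    double t = (Bayer4[i][j] + 1) / 17.0;  // offset in (0, 1)
    double q = std::floor(value * (levels - 1) + t);
    return q / (levels - 1);
}
```

A mid-gray input then rounds up at some pixel positions and down at others, producing the characteristic regular dither pattern.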
Floyd-Steinberg Dither

void R2Image::FloydSteinbergDither(int nbits);
Converts an image to nbits bits per channel using Floyd-Steinberg dithering with error diffusion. Each pixel (x, y) is quantized, and the quantization error is computed. The error is then diffused to the neighboring pixels (x + 1, y), (x - 1, y + 1), (x, y + 1), and (x + 1, y + 1), with weights 7/16, 3/16, 5/16, and 1/16, respectively.
 
Examples, left to right: nbits = 1, 2, 3, 4, 5.
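The scanline algorithm can be sketched for a single-channel image (row-major vector of doubles in [0, 1]; this layout and the function name are assumptions, not the R2Image representation):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Quantize each pixel in scanline order and diffuse the quantization
// error to the four standard Floyd-Steinberg neighbors.
void FloydSteinbergDither(std::vector<double>& img, int w, int h, int nbits) {
    int levels = 1 << nbits;
    auto add = [&](int x, int y, double e) {
        if (x >= 0 && x < w && y >= 0 && y < h) img[y * w + x] += e;
    };
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            // Clamp first: accumulated error can push values out of range.
            double old = std::min(1.0, std::max(0.0, img[y * w + x]));
            double q = std::floor(old * (levels - 1) + 0.5) / (levels - 1);
            img[y * w + x] = q;
            double err = old - q;
            add(x + 1, y,     err * 7.0 / 16.0);
            add(x - 1, y + 1, err * 3.0 / 16.0);
            add(x,     y + 1, err * 5.0 / 16.0);
            add(x + 1, y + 1, err * 1.0 / 16.0);
        }
    }
}
```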

Blur

void R2Image::Blur(double sigma);
Blurs an image by convolving it with a Gaussian filter. In the examples below, the Gaussian function used was

  G(x) = exp(-x^2/(2*sigma^2))
 
and the number below each image indicates the sigma of the filter. Normalize the kernel so that its weights sum to 1. You can limit the filter width to ceil(3*sigma)*2+1.
 
Examples: original; sigma = 0.125, 0.5, 2, 8.
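Building the kernel can be sketched in one dimension (function name illustrative). Since the Gaussian is separable, the 2-D blur can then be done as a horizontal pass followed by a vertical pass, which is much cheaper than a full 2-D convolution:

```cpp
#include <cmath>
#include <vector>

// Build a 1-D Gaussian kernel of width ceil(3*sigma)*2 + 1,
// normalized so its weights sum to 1.
std::vector<double> GaussianKernel(double sigma) {
    int half = (int)std::ceil(3.0 * sigma);
    std::vector<double> k(2 * half + 1);
    double sum = 0.0;
    for (int i = -half; i <= half; i++) {
        k[i + half] = std::exp(-(double)(i * i) / (2.0 * sigma * sigma));
        sum += k[i + half];
    }
    for (double& v : k) v /= sum;
    return k;
}
```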

Edge detect

void R2Image::EdgeDetect();
Detect edges in an image by convolving it with an edge detection kernel and taking absolute values. In the example below, the kernel used was
-1 -1 -1
-1  8 -1
-1 -1 -1

 
Example: edge detection result.

Convolve

void R2Image::Convolve(const R2Image& filter);
Read a kernel matrix from a text file and convolve the image with it. For example, here is a possible input file for the kernel, specifying an edge detection operation. The format of the file is "width height nchannels" in the first row, and then pixel values in row, column, channel order. Note that convolution can be used to implement a variety of image processing algorithms including blurring, sharpening, edge detection and thinning, by appropriate choice of kernel.
 

Examples: original; after convolution.
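A generic convolution loop can be sketched for a single-channel, row-major image of doubles (an assumed layout, not the R2Image representation). Out-of-bounds samples are clamped to the nearest edge pixel, which is one reasonable boundary choice; also note this is the correlation form, which equals true convolution for the symmetric kernels used here:

```cpp
#include <algorithm>
#include <vector>

// Convolve a w x h single-channel image with a kw x kh kernel,
// clamping out-of-bounds samples to the nearest edge pixel.
std::vector<double> Convolve(const std::vector<double>& img, int w, int h,
                             const std::vector<double>& kernel, int kw, int kh) {
    std::vector<double> out(img.size(), 0.0);
    int hw = kw / 2, hh = kh / 2;
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            double sum = 0.0;
            for (int j = 0; j < kh; j++) {
                for (int i = 0; i < kw; i++) {
                    int sx = std::min(w - 1, std::max(0, x + i - hw));
                    int sy = std::min(h - 1, std::max(0, y + j - hh));
                    sum += kernel[j * kw + i] * img[sy * w + sx];
                }
            }
            out[y * w + x] = sum;
        }
    }
    return out;
}
```

For edge detection, pass the 3x3 kernel shown above and take absolute values of the result; for blurring, pass a normalized Gaussian.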

Scale

Image *R2Image::Scale(double sx, double sy);
Scales an image by sx in x and by sy in y. The result depends on the current sampling method (point, bilinear, or Gaussian). In the example below, the size of the Gaussian filter is 3x3.
 
Examples: point, bilinear, and Gaussian sampling.
The size of the Gaussian blur kernel is the inverse of the minification factor, rounded up to the closest odd number greater than or equal to 3.

Seam Carving

void R2Image::SeamCarve(int width, int height);
Seam carving resizes an image, changing its aspect ratio while preserving the shapes of salient objects. The idea is to remove (for downscaling) or duplicate (with interpolation, for upscaling) pixels from "unimportant" regions of the scene, whose modification does not affect the important regions and is not easily noticeable. Follow the algorithm described in [Avidan07]. (Top image source)
 

Examples: before and after downscaling; before and after upscaling.

Crop

Image* R2Image::Crop(int x, int y, int w, int h);
Extracts a sub-image at position (x, y), with width w and height h.
 



Example: Crop(42, 147, 85, 93).

Rotate

Image *R2Image::Rotate(double angle);
Rotates an image by the given angle, in radians (a positive angle implies counter-clockwise rotation). The result depends on the current sampling method (point, bilinear, or Gaussian). In the example below, the size of the Gaussian filter is 3x3.
 
Examples: point, bilinear, and Gaussian sampling.

Composite

void R2Image::Composite(Image *top, int operation);
Composites the top image over the base image, using the alpha channel of the top image as a mask. You can use an image editor of your choice to generate the input images you need.
 

Examples: base; top; alpha channel of top; result.
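The standard "over" blend can be sketched per channel (helper name illustrative; the operation parameter would presumably select among such compositing modes):

```cpp
// Composite one channel of the top image over the base image,
// weighting by the top image's alpha value in [0, 1].
double CompositeOver(double top, double top_alpha, double base) {
    return top * top_alpha + base * (1.0 - top_alpha);
}
```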

Fun

void R2Image::Fun();
Warps an image using a creative filter of your choice. In the following example, each pixel is mapped to its corresponding scaled polar coordinates. The artifacts of point sampling are very noticeable in this example.
 
Examples: original; point, bilinear, and Gaussian sampling.

Morph

void R2Image::Morph (const R2Image& target, R2Segment *source_segments, R2Segment *target_segments, int nsegments,
  double t, int sampling_method);
Morph two images using [Beier92]. *this and target are the before and after images, respectively. source_segments and target_segments are corresponding line segments to be aligned. t is the morph time: a number between 0 and 1 indicating which point in the morph sequence should be returned.
 

Examples: the morph sequence at t = 0, 0.11, 0.22, 0.33, 0.44, 0.56, 0.67, 0.78, 0.89, 1, followed by an animation of the sequence.

Exposure Fusion

void R2Image::FuseExposures(int nimages, R2Image **images, double wc, double ws, double we);
Combine multiple images of a scene into a single image, to accommodate scenes with high dynamic range or widely varying object depths. Each output pixel is an appropriate blend of corresponding input pixels, giving higher weight to input pixels that have greater "quality". The quality measure is the product of local contrast, saturation, and well-exposedness, the terms weighted by exponents wc, ws and we respectively. For combining multiple exposures, all three exponents should be non-zero. For combining a focus stack into an image where everything is sharp, set all exponents other than the contrast exponent (wc) to zero, since local contrast is the term that responds to sharpness. Follow the algorithm described in [Mertens07]. (Image source)
 

Examples: the input exposures from lowest to highest, and the fusion result.