brushFilter( image, radius, color, vertsString ): Draws a filled circle centered at every given location, using the given radius and color.
Example: a painted alpha channel (useful for compositeFilter).
brightnessFilter( image, ratio ): Changes the brightness of an image by blending the original colors with black or white according to ratio: when ratio > 0 we blend with white to brighten the image; when ratio < 0 we blend with black to darken it.
Examples: ratio = -1, -0.5, 0, 0.5, 1.
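A minimal sketch of this blend, assuming image is a plain { width, height, data } object whose data holds RGB values in [0, 1] (a hypothetical representation, not necessarily the assignment's image class):

function brightnessFilter(image, ratio) {
  // Blend each channel toward white when ratio > 0, toward black when ratio < 0.
  const target = ratio > 0 ? 1.0 : 0.0;
  const t = Math.abs(ratio);                       // blend weight in [0, 1]
  for (let i = 0; i < image.data.length; i++) {
    image.data[i] = (1 - t) * image.data[i] + t * target;
  }
  return image;
}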
contrastFilter( image, ratio ): Changes the contrast of an image by interpolating between a constant gray image with the average luminance (ratio = -1) and the original image (ratio = 0). Interpolation reduces contrast, extrapolation boosts contrast, and negative factors generate inverted images. We use the following formula, mentioned in Wiki_Contrast:
value = (value - 0.5) * tan((ratio + 1) * PI / 4) + 0.5;
Examples: ratio = -1, -0.5, 0, 0.5, 1.
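A per-channel sketch of the formula above, under the same hypothetical flat-array image representation:

function contrastFilter(image, ratio) {
  // Remap each channel with the tangent curve; the slope grows with ratio.
  const slope = Math.tan((ratio + 1) * Math.PI / 4);
  for (let i = 0; i < image.data.length; i++) {
    // Note: the formula pivots around 0.5; pivoting around the image's average
    // luminance instead would match the verbal description more closely.
    const v = (image.data[i] - 0.5) * slope + 0.5;
    image.data[i] = Math.min(1, Math.max(0, v));   // clamp to [0, 1]
  }
  return image;
}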
gammaFilter( image, logOfGamma ): Changes the image by applying gamma correction, V_out = Math.pow(V_in, gamma), where gamma = Math.exp(logOfGamma).
Examples: logOfGamma = -1, -0.4, 0, 0.4, 1.
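A direct sketch of the gamma mapping, assuming image.data is a flat array of channel values in [0, 1]:

function gammaFilter(image, logOfGamma) {
  const gamma = Math.exp(logOfGamma);              // gamma = e^logOfGamma, as above
  for (let i = 0; i < image.data.length; i++) {
    image.data[i] = Math.pow(image.data[i], gamma);
  }
  return image;
}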
vignetteFilter( image, innerR, outerR ): Darkens the corners of the image, as observed when using lenses with very wide apertures (ref). The function takes the inner radius and outer radius as inputs. The image should be perfectly clear up to innerR, perfectly dark (black) at outerR and beyond, and darken smoothly in the circular ring in between. Both radii are specified as multiples of half the length of the image diagonal (so 1.0 is the distance from the image center to a corner).
Examples: innerR = 0.25, outerR = 1; innerR = 0.5, outerR = 1; innerR = 0.25, outerR = 0.75; innerR = 0, outerR = 0.75.
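A sketch with a linear falloff between the two radii (a smoothstep would also work), assuming the flat-array RGB representation used in the sketches above:

function vignetteFilter(image, innerR, outerR) {
  const cx = image.width / 2, cy = image.height / 2;
  const halfDiag = Math.sqrt(cx * cx + cy * cy);   // radii are multiples of this
  for (let y = 0; y < image.height; y++) {
    for (let x = 0; x < image.width; x++) {
      const dx = x - cx, dy = y - cy;
      const d = Math.sqrt(dx * dx + dy * dy) / halfDiag;
      // 1 inside innerR, 0 beyond outerR, linear falloff in between.
      let k;
      if (d <= innerR) k = 1;
      else if (d >= outerR) k = 0;
      else k = 1 - (d - innerR) / (outerR - innerR);
      const idx = 3 * (y * image.width + x);
      image.data[idx] *= k;
      image.data[idx + 1] *= k;
      image.data[idx + 2] *= k;
    }
  }
  return image;
}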
histeqFilter( image ): Increases the contrast of the image by histogram equalization on HSL's L channel, that is, by remapping the pixel intensities so that the final histogram is flat. A low-contrast image usually clumps most pixels into a few tight clusters of intensities. Histogram equalization redistributes the pixel intensities uniformly over the full range [0, 1], while maintaining the relationship between light and dark areas of the image.
Examples: before and after equalization.
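A sketch of the remapping step only, assuming the L channel has already been extracted into an array of values in [0, 1] (the RGB-to-HSL conversion and the write-back are omitted):

function equalizeLightness(L) {
  const bins = 256;
  const hist = new Array(bins).fill(0);
  for (const v of L) hist[Math.min(bins - 1, Math.floor(v * bins))]++;
  // Normalized cumulative distribution function of the lightness values.
  const cdf = new Array(bins);
  let sum = 0;
  for (let b = 0; b < bins; b++) { sum += hist[b]; cdf[b] = sum / L.length; }
  // Remapping each value to its CDF flattens the histogram.
  return L.map(v => cdf[Math.min(bins - 1, Math.floor(v * bins))]);
}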
saturationFilter( image, ratio ): Changes the saturation of an image by interpolating between a gray-level version of the image (ratio = -1) and the original image (ratio = 0). Interpolation decreases saturation, extrapolation increases it, and negative factors preserve luminance but invert the hue of the input image. See Graphica Obscura; its parameter alpha corresponds to alpha = 1 + ratio in our slider.
Examples: ratio = -1, -0.5, 0, 0.5, 1.
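A sketch of the interpolation/extrapolation against a per-pixel gray value, assuming the flat-array RGB representation and Rec. 709 luminance weights:

function saturationFilter(image, ratio) {
  const alpha = 1 + ratio;   // 0 -> grayscale, 1 -> original, > 1 -> extrapolate
  for (let i = 0; i < image.data.length; i += 3) {
    const r = image.data[i], g = image.data[i + 1], b = image.data[i + 2];
    const lum = 0.2126 * r + 0.7152 * g + 0.0722 * b;   // Rec. 709 luminance
    image.data[i]     = Math.min(1, Math.max(0, lum + alpha * (r - lum)));
    image.data[i + 1] = Math.min(1, Math.max(0, lum + alpha * (g - lum)));
    image.data[i + 2] = Math.min(1, Math.max(0, lum + alpha * (b - lum)));
  }
  return image;
}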
whiteBalanceFilter( image, hex ): Adjusts the white balance of the scene to compensate for lighting that is too warm, too cool, or tinted, producing a neutral image. Use the von Kries method: convert the image from RGB to the LMS color space (there are several slightly different versions of this space; use any reasonable one, e.g. RLAB), divide by the LMS coordinates of the white point color (the estimated tint of the illumination), and convert back to RGB.
Examples: before correction (too warm) and after correction (neutral), with given white hex #cee2f5 and #f5cece.
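A simplified sketch that applies the von Kries-style per-channel division directly in RGB rather than in a proper LMS space (a real implementation would convert RGB to LMS, divide, and convert back); the hex-parsing helper is hypothetical:

function hexToRgb(hex) {
  // "#rrggbb" -> [r, g, b] in [0, 1], e.g. "#cee2f5" -> [0.808, 0.886, 0.961]
  const n = parseInt(hex.slice(1), 16);
  return [(n >> 16) & 255, (n >> 8) & 255, n & 255].map(v => v / 255);
}

function whiteBalanceFilter(image, hex) {
  const white = hexToRgb(hex);                      // estimated tint of the illumination
  for (let i = 0; i < image.data.length; i += 3) {
    for (let c = 0; c < 3; c++) {
      // Divide by the white point so the given color maps toward neutral white.
      image.data[i + c] = Math.min(1, image.data[i + c] / Math.max(white[c], 1e-6));
    }
  }
  return image;
}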
histMatchFilter( image, refImg, value ): Adjusts the color/contrast of the input image by matching its histogram to that of refImg. value controls whether to match the luminance channel or the RGB channels. The results in the first row below match histograms in the RGB channels; the results in the second row match only the luminance.
Examples: reference image = town, flower (RGB matching); reference image = town, flower (luminance matching).
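A sketch of matching a single channel via cumulative histograms; running it on each RGB channel gives results like the first row, running it only on luminance gives results like the second:

function matchChannel(src, ref, bins = 256) {
  // src and ref are arrays of values in [0, 1]; returns src remapped so that
  // its histogram approximates ref's.
  const cdf = arr => {
    const h = new Array(bins).fill(0);
    for (const v of arr) h[Math.min(bins - 1, Math.floor(v * bins))]++;
    let s = 0;
    return h.map(c => (s += c) / arr.length);
  };
  const srcCdf = cdf(src), refCdf = cdf(ref);
  // For each source bin, pick the reference level with the closest CDF value.
  const lut = new Array(bins);
  let j = 0;
  for (let b = 0; b < bins; b++) {
    while (j < bins - 1 && refCdf[j] < srcCdf[b]) j++;
    lut[b] = j / (bins - 1);
  }
  return src.map(v => lut[Math.min(bins - 1, Math.floor(v * bins))]);
}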
gaussianFilter( image, sigma ): Blurs an image by convolving it with a Gaussian filter. In the examples below, the window size of the Gaussian kernel was Math.round(3*sigma)*2+1.
Examples: sigma = 1, 2, 3, 4, 5.
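A direct (non-separable) convolution sketch under the flat-array RGB representation; taps outside the image are skipped and the remaining weights renormalized:

function gaussianFilter(image, sigma) {
  const winR = Math.round(3 * sigma);              // window size is 2*winR + 1
  const { width, height, data } = image;
  const out = new Float32Array(data.length);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const acc = [0, 0, 0];
      let wsum = 0;
      for (let dy = -winR; dy <= winR; dy++) {
        for (let dx = -winR; dx <= winR; dx++) {
          const sx = x + dx, sy = y + dy;
          if (sx < 0 || sx >= width || sy < 0 || sy >= height) continue;
          const w = Math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma));
          const idx = 3 * (sy * width + sx);
          acc[0] += w * data[idx];
          acc[1] += w * data[idx + 1];
          acc[2] += w * data[idx + 2];
          wsum += w;
        }
      }
      const o = 3 * (y * width + x);
      out[o] = acc[0] / wsum;                      // dividing by wsum normalizes the kernel
      out[o + 1] = acc[1] / wsum;
      out[o + 2] = acc[2] / wsum;
    }
  }
  image.data = out;
  return image;
}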
sharpenFilter( image ): Sharpens edges in an image by convolving it with the edge kernel below and adding the result to the original image:
-1 | -1 | -1
-1 |  8 | -1
-1 | -1 | -1
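A sketch that applies the 3x3 kernel and adds the edge response back onto each pixel (border pixels are left unchanged here for brevity):

function sharpenFilter(image) {
  const kernel = [-1, -1, -1, -1, 8, -1, -1, -1, -1];   // 3x3 edge kernel from the table above
  const { width, height, data } = image;
  const out = Float32Array.from(data);
  for (let y = 1; y < height - 1; y++) {
    for (let x = 1; x < width - 1; x++) {
      for (let c = 0; c < 3; c++) {
        let edge = 0, k = 0;
        for (let dy = -1; dy <= 1; dy++) {
          for (let dx = -1; dx <= 1; dx++, k++) {
            edge += kernel[k] * data[3 * ((y + dy) * width + (x + dx)) + c];
          }
        }
        const idx = 3 * (y * width + x) + c;
        // Add the edge response to the original pixel, then clamp.
        out[idx] = Math.min(1, Math.max(0, data[idx] + edge));
      }
    }
  }
  image.data = out;
  return image;
}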
edgeFilter( image ): Convolves the image with the edge kernel. We invert the result (pixel = 1 - pixel) in the examples below for better visualization.
Examples: man.jpg, flower.jpg.
medianFilter( image, winR ): Blurs an image by replacing each pixel with the median of its neighboring pixels in a (2*winR+1) x (2*winR+1) window. The results below apply the median filter to each RGB channel separately; you could also sort the pixels by luminance only.
Examples: winR = 1, 2, 3, 4, 5.
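A sketch that filters each RGB channel separately and clamps window samples at the image border:

function medianFilter(image, winR) {
  const { width, height, data } = image;
  const out = Float32Array.from(data);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      for (let c = 0; c < 3; c++) {          // filter each RGB channel separately
        const neighbors = [];
        for (let dy = -winR; dy <= winR; dy++) {
          for (let dx = -winR; dx <= winR; dx++) {
            const sx = Math.min(width - 1, Math.max(0, x + dx));   // clamp at borders
            const sy = Math.min(height - 1, Math.max(0, y + dy));
            neighbors.push(data[3 * (sy * width + sx) + c]);
          }
        }
        neighbors.sort((a, b) => a - b);
        out[3 * (y * width + x) + c] = neighbors[Math.floor(neighbors.length / 2)];
      }
    }
  }
  image.data = out;
  return image;
}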
bilateralFilter( image, sigmaR, sigmaS ): Blurs an image by replacing each pixel with a weighted average of nearby pixels. The weights depend not only on the Euclidean distance between pixel positions but also on the pixel difference, which can be either the luminance difference or the L2 distance in color space. For the pixel I(i,j) located at (i,j), the weight of pixel I(k,l) is
w(i,j,k,l) = exp( -((i-k)^2 + (j-l)^2) / (2*sigmaS^2) - ||I(i,j) - I(k,l)||^2 / (2*sigmaR^2) ),
i.e. the product of a spatial Gaussian and a range Gaussian. Note that in this implementation we multiply sigmaR by sqrt(2)*winR; if we don't take this factor into consideration, the filter does not do any blurring and the result looks unchanged.
Examples: sigmaR = 1, sigmaS = 1; sigmaR = 2, sigmaS = 1; sigmaR = 3, sigmaS = 0.5; sigmaR = 4, sigmaS = 2; sigmaR = 5, sigmaS = 3.
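A sketch of the weighted average under the flat-array RGB representation; the window radius choice is an assumption, and the sigmaR rescaling follows the note above:

function bilateralFilter(image, sigmaR, sigmaS) {
  const { width, height, data } = image;
  const winR = Math.round(2 * sigmaS);             // window radius (an assumed choice)
  const sR = sigmaR * Math.SQRT2 * winR;           // rescale sigmaR by sqrt(2)*winR, as noted above
  const out = new Float32Array(data.length);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i0 = 3 * (y * width + x);
      const acc = [0, 0, 0];
      let wsum = 0;
      for (let dy = -winR; dy <= winR; dy++) {
        for (let dx = -winR; dx <= winR; dx++) {
          const sx = x + dx, sy = y + dy;
          if (sx < 0 || sx >= width || sy < 0 || sy >= height) continue;
          const i1 = 3 * (sy * width + sx);
          // Spatial term: Euclidean distance between pixel positions.
          const spatial = (dx * dx + dy * dy) / (2 * sigmaS * sigmaS);
          // Range term: L2 distance between pixel colors (luminance would also work).
          let d2 = 0;
          for (let c = 0; c < 3; c++) {
            const diff = data[i0 + c] - data[i1 + c];
            d2 += diff * diff;
          }
          const w = Math.exp(-(spatial + d2 / (2 * sR * sR)));
          acc[0] += w * data[i1];
          acc[1] += w * data[i1 + 1];
          acc[2] += w * data[i1 + 2];
          wsum += w;
        }
      }
      out[i0] = acc[0] / wsum;
      out[i0 + 1] = acc[1] / wsum;
      out[i0 + 2] = acc[2] / wsum;
    }
  }
  image.data = out;
  return image;
}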
quantizeFilter( image, numBits ): Converts an image to numBits bits per channel using uniform quantization. The number of output levels per channel is 2^numBits, evenly distributed so that the lowest level is 0.0 and the highest is 1.0. Every input value is mapped to the closest available output level.
Examples: numBits = 1, 2, 3, 4.
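A minimal sketch of uniform quantization on flat-array channel values in [0, 1]:

function quantizeFilter(image, numBits) {
  const levels = (1 << numBits) - 1;     // highest level index; outputs are k / levels
  for (let i = 0; i < image.data.length; i++) {
    image.data[i] = Math.round(image.data[i] * levels) / levels;
  }
  return image;
}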
randomFilter( image, numBits ): Converts an image to numBits bits per channel using random dithering. It is similar to uniform quantization, but uniform random noise spanning one quantization step is added to each component before quantization, so that the arithmetic mean of many output pixels with the same input level equals that input level.
Examples: numBits = 1, 2, 3, 4.
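The same quantization as above, with zero-mean uniform noise of one quantization step added first:

function randomFilter(image, numBits) {
  const levels = (1 << numBits) - 1;
  for (let i = 0; i < image.data.length; i++) {
    // Zero-mean noise spanning one quantization step, added before rounding.
    const noise = (Math.random() - 0.5) / levels;
    const v = Math.min(1, Math.max(0, image.data[i] + noise));
    image.data[i] = Math.round(v * levels) / levels;
  }
  return image;
}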
orderedFilter( image, numBits ): Converts an image to numBits bits per channel using ordered dithering. The following examples used the pattern
Bayer4 =
15 |  7 | 13 |  5
 3 | 11 |  1 |  9
12 |  4 | 14 |  6
 0 |  8 |  2 | 10
Examples: numBits = 1, 2, 3, 4.
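A sketch using the Bayer4 pattern above: the fractional quantization error of each pixel is compared against a per-pixel threshold taken from the pattern:

const BAYER4 = [
  [15,  7, 13,  5],
  [ 3, 11,  1,  9],
  [12,  4, 14,  6],
  [ 0,  8,  2, 10],
];

function orderedFilter(image, numBits) {
  const levels = (1 << numBits) - 1;
  const { width, height, data } = image;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      for (let c = 0; c < 3; c++) {
        const idx = 3 * (y * width + x) + c;
        const scaled = data[idx] * levels;
        const err = scaled - Math.floor(scaled);            // fractional quantization error
        const threshold = (BAYER4[y % 4][x % 4] + 1) / 17;  // 17 = 4*4 + 1
        data[idx] = (err > threshold ? Math.ceil(scaled) : Math.floor(scaled)) / levels;
      }
    }
  }
  return image;
}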
floydFilter( image, numBits ): Converts an image to numBits bits per channel using Floyd-Steinberg dithering with error diffusion. Each pixel (x, y) is quantized and the quantization error is computed. The error is then diffused to the neighboring pixels (x + 1, y), (x - 1, y + 1), (x, y + 1), and (x + 1, y + 1), with weights 7/16, 3/16, 5/16, and 1/16, respectively.
Examples: numBits = 1, 2, 3, 4.
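A sketch that quantizes in scanline order and pushes each pixel's error onto the listed neighbors:

function floydFilter(image, numBits) {
  const levels = (1 << numBits) - 1;
  const { width, height, data } = image;
  // (dx, dy, weight) triples for the four neighbors listed above.
  const diffuse = [[1, 0, 7 / 16], [-1, 1, 3 / 16], [0, 1, 5 / 16], [1, 1, 1 / 16]];
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      for (let c = 0; c < 3; c++) {
        const idx = 3 * (y * width + x) + c;
        const oldV = data[idx];
        const newV = Math.round(Math.min(1, Math.max(0, oldV)) * levels) / levels;
        data[idx] = newV;
        const err = oldV - newV;
        for (const [dx, dy, w] of diffuse) {
          const nx = x + dx, ny = y + dy;
          if (nx < 0 || nx >= width || ny >= height) continue;
          data[3 * (ny * width + nx) + c] += err * w;
        }
      }
    }
  }
  return image;
}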
scaleFilter( image, ratio ): Scales an image in width and height by ratio. The result depends on the current sampling method (point, bilinear, or Gaussian). In the examples below, sigma of the Gaussian filter is 1, its window radius is 3, and ratio = 0.7.
Examples: point, bilinear, Gaussian sampling.
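All three resampling filters reduce to sampling the source image at non-integer coordinates. A bilinear sampler and a scale sketch under the flat-array RGB representation (point or Gaussian sampling would swap out sampleBilinear):

function sampleBilinear(image, x, y) {
  // Returns the RGB color at continuous coordinates (x, y) by blending the
  // four surrounding pixels; coordinates are clamped to the image bounds.
  const { width, height, data } = image;
  const x0 = Math.min(width - 1, Math.max(0, Math.floor(x)));
  const y0 = Math.min(height - 1, Math.max(0, Math.floor(y)));
  const x1 = Math.min(width - 1, x0 + 1);
  const y1 = Math.min(height - 1, y0 + 1);
  const tx = Math.min(1, Math.max(0, x - x0));
  const ty = Math.min(1, Math.max(0, y - y0));
  const px = (xx, yy, c) => data[3 * (yy * width + xx) + c];
  const out = [0, 0, 0];
  for (let c = 0; c < 3; c++) {
    const top = (1 - tx) * px(x0, y0, c) + tx * px(x1, y0, c);
    const bot = (1 - tx) * px(x0, y1, c) + tx * px(x1, y1, c);
    out[c] = (1 - ty) * top + ty * bot;
  }
  return out;
}

function scaleFilter(image, ratio) {
  const width = Math.round(image.width * ratio);
  const height = Math.round(image.height * ratio);
  const data = new Float32Array(3 * width * height);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      // Inverse-map the destination pixel into the source image and sample.
      const rgb = sampleBilinear(image, x / ratio, y / ratio);
      data.set(rgb, 3 * (y * width + x));
    }
  }
  return { width, height, data };
}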
rotateFilter( image, radians, sampleMode ): Rotates an image by the given angle in radians (a positive angle implies clockwise rotation). The result depends on the current sampling method (point, bilinear, or Gaussian). We set sigma of the Gaussian filter to 1.0 and its window radius to 3.0. In the examples below, radians = 0.2 * pi.
Examples: point, bilinear, Gaussian sampling.
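A rotation sketch by inverse mapping, reusing the sampleBilinear helper from the scaleFilter sketch above; the output keeps the source dimensions, and the rotation sign depends on the coordinate convention:

function rotateFilter(image, radians) {
  // Inverse mapping: for each destination pixel, rotate back around the image
  // center and sample the source (bilinear here).
  const { width, height } = image;
  const cx = width / 2, cy = height / 2;
  const cos = Math.cos(-radians), sin = Math.sin(-radians);
  const data = new Float32Array(3 * width * height);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const dx = x - cx, dy = y - cy;
      const sx = cx + dx * cos - dy * sin;
      const sy = cy + dx * sin + dy * cos;
      if (sx >= 0 && sx < width && sy >= 0 && sy < height) {
        data.set(sampleBilinear(image, sx, sy), 3 * (y * width + x));
      }                                   // pixels mapping outside the source stay black
    }
  }
  return { width, height, data };
}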
swirlFilter( image, radians, sampleMode ): Warps an image using a creative filter of your choice. In the following examples, each pixel is mapped to its corresponding scaled polar coordinates, with radians = 0.4 * pi.
Examples: point, bilinear, Gaussian sampling.
compositeFilter( backgroundImg, foregroundImg ): Composites the foreground image over the background image, using the alpha channel of the foreground image to blend the two images. The alpha channel can be obtained by pushing a third image or by painting one with "Brush". Gaussian-smoothing the painted alpha channel usually gives a better result.
Examples (two sets): backgroundImg, foregroundImg, foregroundImg (alpha channel), result.
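A sketch of the "over" blend, assuming both images have the same dimensions and that data holds RGBA values in [0, 1]:

function compositeFilter(backgroundImg, foregroundImg) {
  // out = alpha * foreground + (1 - alpha) * background, per channel.
  const bg = backgroundImg.data, fg = foregroundImg.data;
  for (let i = 0; i < bg.length; i += 4) {
    const a = fg[i + 3];                  // foreground alpha
    for (let c = 0; c < 3; c++) {
      bg[i + c] = a * fg[i + c] + (1 - a) * bg[i + c];
    }
  }
  return backgroundImg;
}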
morphFilter( initialImg, finalImg, lines, alpha ): Morphs two images using [Beier92]. initialImg and finalImg are the before and after images, respectively, and lines are the corresponding line segments to be aligned. alpha is the morph time: it can be a number between 0 and 1 indicating which point in the morph sequence should be returned, or (start:step:end) to define a morph sequence. For the parameters we set p = 0.5, a = 0.01, and b = 2.
Examples: alpha = 0, 0.11, 0.22, 0.33, 0.44, 0.56, 0.67, 0.78, 0.89, 1.
We also show a morph sequence with alpha = (0:0.1:1) for the example images and morph lines provided with the assignment zip, together with a still frame at alpha = 0.5.
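A sketch of the per-line weight from [Beier92] with the parameters quoted above (the full warp, which also needs the (u, v) projection of each pixel onto each line pair, is omitted):

function lineWeight(lineLength, dist, p = 0.5, a = 0.01, b = 2) {
  // dist is the distance from the pixel to the line segment; longer lines and
  // nearer lines get more influence on the pixel's displacement.
  return Math.pow(Math.pow(lineLength, p) / (a + dist), b);
}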
paletteFilter( image, colorNum ): Extracts colorNum colors as a palette to represent the colors in the image. Here we use the k-means method with grid acceleration to extract the color palette.
Examples: colorNum = 2, 3, 5.
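A plain k-means sketch over RGB pixels (without the grid acceleration mentioned above), returning the palette as an array of colorNum RGB triples; it assumes the flat-array representation used throughout these sketches:

function paletteFilter(image, colorNum, iterations = 10) {
  const { data } = image;
  const pixelCount = data.length / 3;
  // Initialize cluster centers with randomly chosen pixels.
  let centers = [];
  for (let k = 0; k < colorNum; k++) {
    const p = 3 * Math.floor(Math.random() * pixelCount);
    centers.push([data[p], data[p + 1], data[p + 2]]);
  }
  const labels = new Array(pixelCount).fill(0);
  for (let iter = 0; iter < iterations; iter++) {
    // Assignment step: attach each pixel to its nearest center (L2 in RGB).
    for (let i = 0; i < pixelCount; i++) {
      let best = 0, bestD = Infinity;
      for (let k = 0; k < colorNum; k++) {
        let d = 0;
        for (let c = 0; c < 3; c++) {
          const diff = data[3 * i + c] - centers[k][c];
          d += diff * diff;
        }
        if (d < bestD) { bestD = d; best = k; }
      }
      labels[i] = best;
    }
    // Update step: move each center to the mean of its assigned pixels.
    const sums = centers.map(() => [0, 0, 0, 0]);          // r, g, b, count
    for (let i = 0; i < pixelCount; i++) {
      const s = sums[labels[i]];
      for (let c = 0; c < 3; c++) s[c] += data[3 * i + c];
      s[3]++;
    }
    centers = sums.map((s, k) => s[3] ? [s[0] / s[3], s[1] / s[3], s[2] / s[3]] : centers[k]);
  }
  return centers;                                          // the extracted palette
}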