In this assignment you will create a simple ray tracing program. The program will read in a scene and output an image.
You should use the following skeleton code (COS426_assignment3.zip) as a starting point for your assignment. We provide you with several files, but you should mainly change raytrace.cpp.

- raypro.cpp: Parses the command line arguments and calls the ray tracer function.
- rayview.cpp: Interactive program for viewing scenes (hitting the 'S' key calls the ray tracer function for the current camera view).
- raytrace.cpp: Implements the ray tracer -- this is the main file you should edit.
- R3Scene.cpp: Code to support reading and representing scenes.
- R2: Code to support operations on 2D points, vectors, lines, segments, and images.
- R3: Code to support operations on 3D points, vectors, lines, segments, planes, and meshes.
- jpeg: Code to read/write JPEG files.
- raypro.[vcproj/sln]: Project files for Visual Studio .NET 2005 on Windows.
- Makefile: Make file for Mac.

After you copy the provided files to your directory, the first thing to do is compile the program. If you are developing on a Windows machine and have Visual Studio .NET 2005 installed, use the provided project files to build the program. If you are developing on a Mac, type make in the assignment3 directory. In either case, two executables will be created: raypro and rayview (or raypro.exe and rayview.exe). raypro is the program that reads a scene and outputs an image, while rayview provides a way for you to visualize scenes interactively before executing the ray tracer. Of course, you are welcome to change rayview.cpp any way you wish to provide visualizations.

The skeleton code is able to read scenes in a simple scene file format. This format was created to provide the features required by this assignment -- a simple scene graph along with materials, lights, and cameras. We provide several sample scenes that you can use to test your program. Of course, you are not limited to these files -- you are encouraged to create your own, either by hand or with your programs from assignment #2. Note that it is very easy to combine multiple meshes and scenes in a single scene file using the "mesh" and "include" commands, and it is convenient to size and position scene elements with the transformations provided by the "begin" and "end" commands.
The user interface for this assignment was kept as simple as possible, so you can concentrate on the ray tracing issues. The raypro program runs on the command line. It reads a scene from a .scn file (the first program argument) and writes an image to a new file (the second program argument). The program allows a few program arguments that control features of the raytracer. For example,

% raypro in.scn out.jpg -width 512 -height 512

creates a 512x512 image (out.jpg) of the scene in in.scn. Alternatively, you could view the scene interactively with OpenGL using

% rayview in.scn -image out.jpg

in which case you could select a desired view and image size interactively and then hit the 'S' key to begin raytracing to produce an image of the scene with the current camera parameters. The rayview program also allows you to view the bounding box hierarchy ('B') and to print the current camera parameters in a form that can be added into a .scn file. Of course, you are welcome to create your own program arguments by editing raypro.cpp and/or rayview.cpp. However, please do not change the program arguments already provided.
The assignment is worth 20 points. The following is a list of features that you may implement (listed roughly from easiest to hardest within each group). The number in front of each feature corresponds to how many points the feature is worth. The features in bold face are required; the others are optional. Refer to the examples web page to see images of inputs and outputs for several of these features. For each feature that you implement, include a screenshot with your writeup. Features without screenshots will not be graded.
- Basic Ray Generation:
- (1) Generate a ray for each pixel: Create a R3Ray through points on a regular grid of pixels on the view plane, as defined by the camera parameters (eye point, towards direction, up direction, xfov, and yfov) and viewport (image width and height).
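For reference, here is a minimal sketch of this computation. It uses stand-in Vec3/Ray/Camera types with the field names from the item above rather than the skeleton's actual R3 classes, and it assumes xfov and yfov are half-angles measured from the view direction; check both assumptions against R3Scene.cpp before porting this into raytrace.cpp.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
static Vec3 Cross(Vec3 a, Vec3 b) {
  return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

struct Camera { Vec3 eye, towards, up; double xfov, yfov; };  // assumed field names
struct Ray { Vec3 origin, direction; };                       // stand-in for R3Ray

// Ray through the center of pixel (i, j) in a width x height image.
// The view plane sits at distance 1 along the towards vector, with
// half-extents tan(xfov) and tan(yfov).
Ray PixelRay(const Camera& cam, int i, int j, int width, int height)
{
  Vec3 right = Cross(cam.towards, cam.up);  // assumes unit-length, orthogonal towards/up
  double dx = (2.0 * (i + 0.5) / width - 1.0) * tan(cam.xfov);
  double dy = (2.0 * (j + 0.5) / height - 1.0) * tan(cam.yfov);
  Vec3 dir = cam.towards + dx * right + dy * cam.up;  // normalize if your code expects it
  return {cam.eye, dir};
}
```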
- Ray-Primitive Intersection:
- (2) Intersect a ray with a sphere: Implement a version of R3Intersects that takes in an R3Ray and an R3Sphere as arguments and returns whether or not the ray intersects the surface of the sphere, and if so what is the position, normal, and parametric value (t) at the first intersection point (see the sketch after this group).
- (1) Intersect a ray with an axis-aligned box: Implement a version of R3Intersects that takes in an R3Ray and an R3Box as arguments and returns whether or not the ray intersects the box, and if so what is the position, normal, and parametric value (t) at the first intersection point.
- (2) Intersect a ray with a triangle mesh: Implement a version of R3Intersects that takes in an R3Ray and an R3Mesh as arguments and returns whether or not the ray intersects the mesh surface, and if so what is the position, normal, and parametric value (t) at the first intersection point.
- (2) Intersect a ray with an axis-aligned cylinder: Implement a version of R3Intersects that takes in an R3Ray and an R3Cylinder as arguments and returns whether or not the ray intersects the cylinder, and if so what is the position, normal, and parametric value (t) at the first intersection point.
- (2) Intersect a ray with an axis-aligned cone: Implement a version of R3Intersects that takes in an R3Ray and an R3Cone as arguments and returns whether or not the ray intersects the cone, and if so what is the position, normal, and parametric value (t) at the first intersection point.
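The sketch promised in the ray-sphere item above: substitute the ray P(t) = O + tD into the implicit sphere equation |P - C|^2 = r^2 and solve the resulting quadratic in t. The types and names are stand-ins, not the skeleton's R3Sphere API; adapt as needed.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
static double Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns true on a hit with t > 0 and fills in position, normal, and t.
bool IntersectSphere(Vec3 origin, Vec3 dir, Vec3 center, double radius,
                     Vec3* position, Vec3* normal, double* t_hit)
{
  // (D.D) t^2 + 2 D.(O-C) t + |O-C|^2 - r^2 = 0
  Vec3 oc = origin - center;
  double a = Dot(dir, dir);
  double b = 2.0 * Dot(dir, oc);
  double c = Dot(oc, oc) - radius * radius;
  double disc = b * b - 4.0 * a * c;
  if (disc < 0) return false;              // ray misses the sphere
  double s = sqrt(disc);
  double t = (-b - s) / (2.0 * a);         // nearer root first
  if (t <= 0) t = (-b + s) / (2.0 * a);    // origin inside the sphere
  if (t <= 0) return false;                // both roots behind the ray
  *t_hit = t;
  *position = origin + t * dir;
  *normal = (1.0 / radius) * (*position - center);  // outward unit normal
  return true;
}
```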
- Ray-Scene Intersection:
- (2) Intersect a ray with a scene: Implement a version of R3Intersects that takes in an R3Ray and an R3Scene as arguments and returns whether or not the ray intersects the scene, and if so what is the scene graph node, position, normal, and parametric value (t) at the first intersection point. This function should traverse the scene graph hierarchy recursively, intersecting the ray with all nodes, and returning the intersection with minimal parametric value (t) along the ray.
- (1) Handle scene traversals with modeling transformations: Augment your recursive scene graph traversal to handle modeling transformations as it traverses the scene graph hierarchy. This feature requires transforming a ray by the inverse of the transformation prior to descending into a node's children and then transforming the intersection point and normal by the transformation and recomputing the parametric value t before returning them to the parent (see the sketch after this group).
- (1) Make an interesting scene: Submit the .scn file, and include a screenshot of your scene.
- (1) Accelerate ray-scene intersection with bounding box checks: Augment your code for ray-scene intersection to check whether a ray intersects the bounding box for each child of a scene graph node before calling R3Intersects recursively for that child. If the ray hits the bounding box, check the parametric value along the ray against the least parametric value seen so far for a ray-primitive intersection previously found during the same recursive traversal, and avoid descending into the child if the parametric value of the ray-bbox intersection is greater. Note that the bounding box for each node is stored in the coordinate system of the parent node to facilitate these checks.
- (1) Accelerate ray-scene intersection by caching the last intersection: Augment your code for ray-scene intersection to: 1) remember the node in the scene graph that yielded the closest intersection last time a ray-scene intersection was performed, 2) check that node first to establish an upper bound on the parametric value (t) of the closest ray intersection (if the new ray intersects the primitives of that node again), and then avoid recursion into scene graph nodes whose ray-bbox intersections have parametric values greater than the established upper bound.
- (1) Accelerate ray-scene intersection by visiting children nodes in front-to-back order: Augment your code for ray-scene intersection with bounding box checks for each node to: 1) intersect the ray with the bounding boxes of all children nodes, 2) sort the intersected children nodes front-to-back with respect to the parametric value along the ray for their ray-bbox intersections, 3) recurse into the children nodes in that sorted order, and 4) terminate the search if a ray-primitive intersection is found at a parametric value less than all remaining ray-bbox intersections.
- (2) Accelerate ray-scene intersection using a grid: Augment your code for ray-scene intersection to index all primitives in the scene spatially with a grid, trace rays through grid cells in front-to-back order, and perform ray-primitive intersections only for primitives residing in grid cells as they are traversed, keeping track of the first hit and terminating when no closer hits are possible.
- (3) Accelerate ray-scene intersection using an octree or BSP tree: Augment your code for ray-scene intersection to index all primitives in the scene spatially with an octree or BSP tree, trace rays through cells in front-to-back order, and perform ray-primitive intersections only for primitives residing in cells as they are traversed, keeping track of the first hit and terminating when no closer hits are possible.
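The sketch promised in the transformations item above. The idea: pull the ray into the node's local coordinates with the inverse transform, intersect there, then push the hit point and normal back out with the forward transform and recompute t. The Node layout and method names here are assumptions, not the skeleton's R3Scene API, and the normal transform shown is only correct for rigid transforms (use the inverse transpose in general).

```cpp
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double Length(Vec3 v) { return sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Minimal 4x4 transform; points use w = 1, directions use w = 0.
struct Mat4 {
  double m[4][4];
  Vec3 Point(Vec3 p) const {
    return {m[0][0] * p.x + m[0][1] * p.y + m[0][2] * p.z + m[0][3],
            m[1][0] * p.x + m[1][1] * p.y + m[1][2] * p.z + m[1][3],
            m[2][0] * p.x + m[2][1] * p.y + m[2][2] * p.z + m[2][3]};
  }
  Vec3 Direction(Vec3 v) const {
    return {m[0][0] * v.x + m[0][1] * v.y + m[0][2] * v.z,
            m[1][0] * v.x + m[1][1] * v.y + m[1][2] * v.z,
            m[2][0] * v.x + m[2][1] * v.y + m[2][2] * v.z};
  }
};

struct Hit { Vec3 position, normal; double t; bool valid; };

struct Node {
  Mat4 transform, inverse;   // assumed: node-to-parent matrix and its inverse
  std::vector<Node*> children;
  // ... primitive data omitted ...
};

// Stand-in for your ray-primitive tests on this node's own shapes.
Hit IntersectPrimitives(const Node&, Vec3, Vec3) { return {{0,0,0}, {0,0,0}, 0, false}; }

Hit IntersectNode(const Node& node, Vec3 origin, Vec3 dir)
{
  // 1. Pull the ray into the node's local coordinate system.
  Vec3 lo = node.inverse.Point(origin);
  Vec3 ld = node.inverse.Direction(dir);

  Hit best = IntersectPrimitives(node, lo, ld);
  for (size_t i = 0; i < node.children.size(); i++) {
    Hit h = IntersectNode(*node.children[i], lo, ld);
    if (h.valid && (!best.valid || h.t < best.t)) best = h;
  }
  if (!best.valid) return best;

  // 2. Push the hit back to the parent's coordinates.
  best.position = node.transform.Point(best.position);
  best.normal = node.transform.Direction(best.normal);  // rigid transforms only

  // 3. Recompute t in parent coordinates (the local t is not comparable
  //    when the transform scales distances).
  best.t = Length(best.position - origin) / Length(dir);
  return best;
}
```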
- Lighting:
- (2) Phong Illumination: Compute the reflected radiance at every ray-primitive intersection by evaluating the Phong reflectance model for every directional, point, and spot light in the scene. The ambient term (ka) should be added only once (not per light), while the diffuse and specular terms should be added for every light as (kd * (N dot L) + ks * (V dot R)^n) * IL, where IL is the irradiance due to a directional, point, or spot light source according to the equations provided in the .scn file format description. Note that point and spot light sources should include attenuation of the light power with distance, and spot light sources should include divergence from the central angle.
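A minimal sketch of the per-light loop for point lights, using stand-in types; the attenuation coefficients (ca, la, qa) and the exact form of IL are assumptions here and should be taken from the .scn file format description.

```cpp
#include <cmath>
#include <vector>
#include <algorithm>

struct Vec3 { double x, y, z; };
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
static Vec3 Mul(Vec3 a, Vec3 b) { return {a.x * b.x, a.y * b.y, a.z * b.z}; }
static double Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static double Length(Vec3 v) { return sqrt(Dot(v, v)); }

struct Material { Vec3 ka, kd, ks; double n; };                  // n = Phong exponent
struct PointLight { Vec3 position, color; double ca, la, qa; };  // assumed attenuation terms

// Radiance at point P with unit normal N, viewed from unit direction V.
Vec3 Phong(Vec3 P, Vec3 N, Vec3 V, const Material& mat, Vec3 ambient,
           const std::vector<PointLight>& lights)
{
  Vec3 result = Mul(mat.ka, ambient);   // ambient term added once, not per light
  for (size_t i = 0; i < lights.size(); i++) {
    const PointLight& light = lights[i];
    Vec3 toLight = light.position - P;
    double d = Length(toLight);
    Vec3 L = (1.0 / d) * toLight;
    double NdotL = Dot(N, L);
    if (NdotL <= 0) continue;           // light is behind the surface
    Vec3 IL = (1.0 / (light.ca + light.la * d + light.qa * d * d)) * light.color;
    Vec3 R = 2.0 * NdotL * N - L;       // L reflected about N
    double VdotR = std::max(0.0, Dot(V, R));
    result = result + Mul(IL, NdotL * mat.kd + pow(VdotR, mat.n) * mat.ks);
  }
  return result;
}
```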
- Texture mapping:
- (1) Texture mapping: Implement texture mapping to modulate the diffuse and/or specular reflectance properties across a surface. For example, for every ray intersection at a surface point with texture coordinates (u,v), you could use Texture(u,v) * kd, rather than kd, in the diffuse reflection calculations (see the sketch after this group).
- (1) Bump mapping: Implement bump mapping to provide the illusion of surface roughness. For every ray intersection at a surface point with texture coordinates (u,v) and normal N, you should use a function of Texture(u,v) and N to modulate the surface normal before performing reflection calculations. The beginning of this tutorial covers how to modify an original surface normal N to obtain the bump-mapped surface normal.
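The sketch promised in the texture mapping item: a nearest-neighbor texel lookup that modulates kd. The Texture struct here is a hypothetical stand-in for however you store images (e.g., the skeleton's R2Image), and the wrapping behavior is an assumption.

```cpp
#include <cmath>
#include <vector>
#include <algorithm>

struct Color { double r, g, b; };

struct Texture {               // hypothetical image holder
  int width, height;
  std::vector<Color> texels;   // row-major, width * height entries

  // Nearest-neighbor lookup; u and v wrap so that textures tile.
  Color Sample(double u, double v) const {
    u -= floor(u); v -= floor(v);
    int x = std::min(int(u * width), width - 1);
    int y = std::min(int(v * height), height - 1);
    return texels[y * width + x];
  }
};

// Use Texture(u,v) * kd in place of kd in the diffuse calculation.
Color ModulatedKd(const Texture& tex, double u, double v, Color kd)
{
  Color t = tex.Sample(u, v);
  return {t.r * kd.r, t.g * kd.g, t.b * kd.b};
}
```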
- Shadows:
- (1) Shadow rays: For every direct illumination calculation, cast a "shadow ray" from the reflection point to the light source position and include the contribution of that light source only if the shadow ray is not blocked by another surface in the scene.
- (2) Area lights and soft shadows: Define area lights via "light emitting triangles," which are triangles with the emission color of the light, (e_r, e_g, e_b) set to nonzero (triangles with zero emission color should be treated normally). Then for each area light, cast a small number n (say n = 16) of rays towards random points on the triangle, and light based on the proportion of rays which are not blocked by other surfaces in the scene. For large area lights, this should give a penumbra, or region where the light source is only partially concealed. To generate random points within a triangle, one can sample on a quadrilateral, and discard points outside the triangle.
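A sketch of the sampling loop for the area-light item, implementing the quadrilateral-with-rejection scheme described above; Occluded() is a stand-in for your shadow-ray visibility test.

```cpp
#include <cstdlib>

struct Vec3 { double x, y, z; };
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }

static double Rand01() { return rand() / (RAND_MAX + 1.0); }

// Uniform random point in triangle (a, b, c): sample the parallelogram
// spanned by (b - a) and (c - a) and discard points outside the triangle.
// (Folding s,t -> 1-s,1-t instead of rejecting also works.)
Vec3 RandomPointInTriangle(Vec3 a, Vec3 b, Vec3 c)
{
  for (;;) {
    double s = Rand01(), t = Rand01();
    if (s + t <= 1.0) return a + s * (b - a) + t * (c - a);
  }
}

// Stand-in: true if some surface blocks the segment from 'from' to 'to'.
bool Occluded(Vec3 from, Vec3 to);

// Fraction of n shadow rays from P that reach the light triangle;
// scale the light's contribution by this value to get the penumbra.
double LightVisibility(Vec3 P, Vec3 a, Vec3 b, Vec3 c, int n /* e.g. 16 */)
{
  int unblocked = 0;
  for (int i = 0; i < n; i++)
    if (!Occluded(P, RandomPointInTriangle(a, b, c))) unblocked++;
  return double(unblocked) / n;
}
```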
- Global Illumination:
- (2) Indirect specular reflection: Trace a secondary ray from every ray-surface intersection in the direction of perfect specular reflection, compute the irradiance IR for that ray recursively, and include it in your illumination calculation as ks * IR (see the sketch after this group). Rays should be traced up to the specified maximum number of ray intersections (max_depth) or until the luminance of the radiance along a ray is below a threshold (min_luminance).
- (1) Transmission: Trace a secondary ray from every ray-surface intersection in the direction of perfect transmission (without refraction), compute the irradiance IT for that ray recursively, and include it in your illumination calculation as kt * IT. Of course, this should be done only for non-opaque surfaces -- i.e., ones where kt is non-zero. Rays should be traced up to the specified maximum number of ray intersections (max_depth) or until the luminance of the radiance along a ray is below a threshold (min_luminance).
- (2) Refraction: Trace a secondary ray from every ray-surface intersection in the direction of perfect refraction according to Snell's Law (instead of in the direction of perfect transmission), compute the irradiance IT for that ray recursively, and include it in your illumination calculation as kt * IT. Of course, this should be done only for non-opaque surfaces -- i.e., ones where kt is non-zero. Rays should be traced up to the specified maximum number of ray intersections (max_depth) or until the luminance of the radiance along a ray is below a threshold (min_luminance).
- (3) Distributed ray tracing: Trace multiple secondary rays from every ray-surface intersection by randomly sampling outgoing directions, compute the irradiance IR,i for each of these rays recursively, perform a full Phong lighting calculation for each ray, and sum the results to produce the outgoing radiance for the intersection. Rays should be traced up to the specified maximum number of ray intersections (max_depth) or until the luminance of the radiance along a ray is below a threshold (min_luminance).
- (3) Omni-directional lighting. Real-world lighting does not consist of a finite number of point light sources, but rather, every point can be considered to be illuminated by an omnidirectional lighting environment. We can capture examples of this lighting with photographs, and store it in an image known as an environment map. We suggest using the environment maps of Paul Debevec, which are available as 2D images in both a spherical coordinate representation, and a representation based on 6 sides of a cube (environment maps are defined for all unique directions in 3D, so they are 2D functions). The environment maps are available in various formats, including PFM (a format for storing arrays of floats, with high dynamic range; PFM reading code), and TIF images (you can convert these to JPEG if this makes reading easier). Implement the command line option "-env filename" to read in environment maps in a format of your choosing, and calculate ambient lighting from the map. Show a screenshot of a strongly specular curved object correctly reflecting the environment map.
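The sketch promised in the indirect specular reflection item: the mirror direction is r = d - 2(d . n)n, and the recursion is cut off by max_depth and min_luminance. TraceRay() is a stand-in for your full ray-scene intersection and lighting routine, and the luminance weights and epsilon offset are common conventions, not values from the skeleton.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
static Vec3 Mul(Vec3 a, Vec3 b) { return {a.x * b.x, a.y * b.y, a.z * b.z}; }
static double Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Mirror direction for incoming direction d about unit normal n.
Vec3 Reflect(Vec3 d, Vec3 n) { return d - 2.0 * Dot(d, n) * n; }

// One common luminance approximation for RGB radiance.
static double Luminance(Vec3 c) { return 0.30 * c.x + 0.59 * c.y + 0.11 * c.z; }

// Stand-in for your full ray-scene intersection + lighting routine.
Vec3 TraceRay(Vec3 origin, Vec3 dir, int depth, Vec3 weight);

// Contribution of the perfect specular bounce at hit point P with unit
// normal N, incoming direction d, and specular coefficient ks. 'weight'
// tracks how much radiance this path can still contribute, so the
// min_luminance cutoff can terminate dim paths early.
Vec3 SpecularBounce(Vec3 P, Vec3 N, Vec3 d, Vec3 ks, Vec3 weight,
                    int depth, int max_depth, double min_luminance)
{
  Vec3 w = Mul(weight, ks);
  if (depth >= max_depth || Luminance(w) < min_luminance)
    return {0, 0, 0};                                 // terminate the recursion
  Vec3 r = Reflect(d, N);
  Vec3 IR = TraceRay(P + 1e-4 * r, r, depth + 1, w);  // offset avoids self-intersection
  return Mul(ks, IR);
}
```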
- Camera Effects:
- (1) Camera antialiasing: Rather than using a uniform sampling of the camera plane (which leads to aliasing) implement a random sampling per pixel to compute an antialiased result. Add an additional command line option "-samples
" that provides the number of samples per-pixel. Note: if you are going to attempt the other points in this section, you should start with this first, as all others that expect multiple samples per pixel build on this. - (1) Motion blur: Just as cameras have a finite (as opposed to infinitessimal) lens, they also have a finite shutter speed. Therefore, a photograph is not a snapshot of incident radiance for one "instant" in time, but rather the average over the time the shutter is open. This comes into consideration when photographing moving scenes; if the objects are moving fast enough, the image will be blurred. Implement the command line option "-motion [dx] [dy] [dz]" where the float arguments indicate the translation of the camera during the shutter interval.
- (2) Reconstruction filters: Rendering is an image sampling problem, where the function to be sampled is the light transport equation relative to the camera plane. As you know, simple averaging of samples is equivalent to a box reconstruction filter. Instead, use additional filters, such as the Gaussian filter, Mitchell, or Lanczos over several pixels to produce a result with less aliasing. Include a blowup of two images comparing the difference with the box filter. Implement the command line option "-filter [int]" where 0 indicates box, and other integers are used for different filters. You can assume a reasonable filter width (such as 4 pixels).
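The antialiasing sketch referenced above: average several jittered samples per pixel. TracePixelSample() is a stand-in for your code that builds the ray through sub-pixel position (i + dx, j + dy) and shades it.

```cpp
#include <cstdlib>

struct Color { double r, g, b; };

static double Rand01() { return rand() / (RAND_MAX + 1.0); }

// Stand-in: trace and shade a ray through sub-pixel position (i + dx, j + dy).
Color TracePixelSample(int i, int j, double dx, double dy);

// Average 'samples' jittered rays per pixel ("-samples [int]").
// Replacing this plain average with a weighted sum over nearby pixels
// turns it into the reconstruction-filter feature above.
Color AntialiasedPixel(int i, int j, int samples)
{
  Color sum = {0, 0, 0};
  for (int s = 0; s < samples; s++) {
    Color c = TracePixelSample(i, j, Rand01(), Rand01());
    sum.r += c.r; sum.g += c.g; sum.b += c.b;
  }
  return {sum.r / samples, sum.g / samples, sum.b / samples};
}
```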
- Debugging
- (3) Debugging. Modify rayview so that rays traced by R3Intersects() are visualized by GL_LINES, for pixels which have x and y coordinates that are multiples of 25. The primary ray, as well as secondary rays such as for shadowing or reflection, should be visualized. After starting the render with S, it should be possible to move the camera in rayview and see the "debugging rays" as they are cast out from the position of the camera when S was pressed. Also, modify your renderer so that it renders only a scanline at a time, and at the end of each scanline, the preview of the render is visualized by a textured quad (representing what the camera sees), which is placed in front of the camera position when S was pressed. OpenGL assumes textures have widths and heights that are powers of 2, so feel free to assume/set the window size to be a power of 2. You may wish to use this function which creates a new GL texture ID containing a given R2Image.
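One possible shape for the ray-visualization part of the debugging feature, using the immediate-mode OpenGL that the 2005-era skeleton targets; the recording hook and all names here are assumptions. You would call RecordDebugRay() from inside R3Intersects() for the debug pixels and DrawDebugRays() from rayview's display loop.

```cpp
#include <utility>
#include <vector>
#include <GL/gl.h>   // <OpenGL/gl.h> on Mac

struct Vec3 { double x, y, z; };

// Segments recorded for pixels with x % 25 == 0 && y % 25 == 0.
static std::vector<std::pair<Vec3, Vec3> > debug_rays;

void RecordDebugRay(Vec3 start, Vec3 end)
{
  debug_rays.push_back(std::make_pair(start, end));
}

// Draw the recorded primary and secondary rays as GL_LINES.
void DrawDebugRays()
{
  glDisable(GL_LIGHTING);
  glColor3d(1.0, 1.0, 0.0);
  glBegin(GL_LINES);
  for (size_t i = 0; i < debug_rays.size(); i++) {
    glVertex3d(debug_rays[i].first.x, debug_rays[i].first.y, debug_rays[i].first.z);
    glVertex3d(debug_rays[i].second.x, debug_rays[i].second.y, debug_rays[i].second.z);
  }
  glEnd();
  glEnable(GL_LIGHTING);
}
```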
By implementing all the required features, you get 14 points. There are many ways to get more points:

- implementing the optional features listed above;
- (1) submitting one or more images for the art contest;
- (1) submitting a .mpeg or .avi movie showing images generated with a moving camera, a dynamic scene, and/or smoothly varying parameter settings;
- (2) winning the art contest.

For images or movies that you submit, you also have to submit the sequence of commands used to create them; otherwise they will not be considered valid.
It is possible to get more than 20 points. However, after 20 points, each point is divided by 2, and after 22 points, each point is divided by 4. If your raw score is 19, your final score will be 19. If the raw score is 23, you'll get 21.25. For a raw score of 26, you'll get 22.
Extra credit points cannot replace the required features (bold items). Your final score will be calculated by adding 14 to the number of extra credit points you achieve, applying the above formula for diminishing returns, and then subtracting the number of required points that you missed.
You must submit one archive (zip or tar file). Use your Princeton LDAP username for the archive's name. It should have the following directory structure inside the zip:
- [your username]/ (e.g., cdecoro/)
  - source/
    - the complete source code, along with an updated Visual Studio project file (if you added any source files)
  - art/
    - the .mpeg or .avi movie for the movie feature (optional)
    - the images for the art contest (optional)
  - writeup/
    - assignment3.html, containing the writeup (see below)
  - script/
    - runray.[bat|sh], a test script that will run in this directory and take all its inputs from this directory
    - all input files for the test script
The writeup should be an HTML document called assignment3.html. You will use the writeup to demonstrate the effects of the features you have implemented. List each implemented feature in the format given above, and along with the feature name, include a rendered picture (or pictures) that demonstrates it. The caption must indicate the regions of interest. If it is reasonable to demonstrate multiple features in a single picture, you may do so; for example, the entire set of ray-primitive intersection features can be demonstrated with a single, well-chosen scene.
Some features that improve quality will require a comparison of two or more images, for example, showing the effect of various reconstruction filters, motion blur, etc. Features that improve speed (octrees, grid accelerators) do not require a picture, but should include a brief comparison of performance on a scene with and without acceleration.
The included script must generate all pictures shown in the writeup, and run entirely in the script/ directory. Please make sure that you include all input files.
NOTE: Please convert your art contest and writeup images to JPEG format before submitting them (to save space).
Should you choose to create a video, there are many options for video encoders available; however, we have found that FFMPEG works well and runs on both Windows and Mac. MEncoder is another option.
Always remember the late policy and the collaboration policy.
A few hints:
- Do the simplest steps first and test/debug every step before going to the next one.
- There are functions to manipulate 3D geometric primitives in R3.
- Send mail to the cos426 staff.
- The code should compile and link as-is (make sure to try it out before adding your own code!). If you have problems linking to libjpeg, you can simply remove the -DUSE_JPEG flag from the makefile (or #undef USE_JPEG in your code) to disable JPEG support.
- If you should happen to stumble upon anything that you suspect is a bug in the framework code (we're not perfect either!), please mail the preceptor as soon as possible so that it can be corrected.
- If you are stuck/confused/lost/having a problem, meet with the preceptor. He is readily available most days of the week.