This is just a heads-up that any new posts about my rendering project will appear at chrisfontas.wordpress.com. I think the new site will let me better showcase my work. There is new content already available, as well as repostings of old posts I've made here on Blogger.
My Portfolio
Monday, July 21, 2014
Wednesday, June 18, 2014
Spectral Rendering Part III - Dispersion
In addition to my renderer's ability to create iridescent and fluorescent materials using spectral rendering, I also wanted it to handle light dispersion in dielectric materials, which arises because a material's index of refraction depends on the wavelength of the light passing through it. The index of refraction for a given wavelength can be determined using the Sellmeier equation with material-specific coefficients.
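As a rough sketch, the Sellmeier relation n²(λ) = 1 + Σᵢ Bᵢλ²/(λ² − Cᵢ) looks like this. I'm using the published coefficients for BK7 crown glass as a stand-in here; each material in the renderer would carry its own coefficients.

```python
import math

# Published Sellmeier coefficients for BK7 crown glass (a common
# reference material -- a stand-in, not my renderer's diamond values).
B = (1.03961212, 0.231792344, 1.01046945)
C = (0.00600069867, 0.0200179144, 103.560653)  # in micrometers^2

def sellmeier_ior(wavelength_nm):
    """Index of refraction for a given vacuum wavelength in nanometers."""
    lam2 = (wavelength_nm * 1e-3) ** 2  # convert nm -> micrometers, square
    n2 = 1.0 + sum(b * lam2 / (lam2 - c) for b, c in zip(B, C))
    return math.sqrt(n2)

# Blue light bends more than red: n decreases with wavelength.
n_blue = sellmeier_ior(450.0)
n_red = sellmeier_ior(650.0)
```

This wavelength dependence of n is exactly what spreads white light into a spectrum inside the diamonds.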
Every camera ray is randomly assigned a wavelength in the range 390-700 nm, with a step of 5 nm. When it hits a dispersive material, it reflects or refracts correctly according to the Fresnel equations (which are also wavelength dependent). The contribution for each wavelength is multiplied by the RGB value for that wavelength (derived by integrating over the XYZ response sensitivity curves), assuming that each wavelength of light has equal intensity, and is then normalized to 1 for each of the RGB color channels.
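The per-ray wavelength pick and the wavelength-dependent Fresnel term can be sketched roughly like this (the function names are illustrative, not my actual code; the index n would come from the Sellmeier equation above):

```python
import math
import random

def sample_wavelength():
    """Uniformly pick one of the 63 wavelengths in [390, 700] nm, step 5."""
    return 390 + 5 * random.randrange(63)

def fresnel_dielectric(cos_i, n):
    """Unpolarized Fresnel reflectance entering a dielectric of index n
    from vacuum. Since n varies with wavelength, so does the reflectance."""
    sin_t = math.sqrt(max(0.0, 1.0 - cos_i * cos_i)) / n  # Snell's law
    if sin_t >= 1.0:
        return 1.0  # total internal reflection
    cos_t = math.sqrt(1.0 - sin_t * sin_t)
    r_par = (n * cos_i - cos_t) / (n * cos_i + cos_t)
    r_perp = (cos_i - n * cos_t) / (cos_i + n * cos_t)
    return 0.5 * (r_par * r_par + r_perp * r_perp)
```

At normal incidence this reduces to the familiar ((n-1)/(n+1))², about 4% for glass.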
My image below is my best render thus far, but it still has some problems. Each diamond (the two are transformed instances of the same mesh, so they share vertices) has a strange foggy haze on parts of its surface, and I am not entirely sure where it is coming from. I was also hoping the dispersion effects would be more prominent.
Some improvements I still need to make include importance sampling each ray's wavelength based on the XYZ curves instead of choosing one uniformly at random, among other minor details. Hopefully this will help alleviate some of the issues above.
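A minimal sketch of the CDF-inversion approach to importance sampling wavelengths (the weight curve here is a toy stand-in for the real XYZ response):

```python
import bisect
import itertools
import random

# Hypothetical per-wavelength weights -- in the renderer these would be
# the summed XYZ response at each 5 nm bin, not this toy tent function.
wavelengths = list(range(390, 705, 5))
weights = [1.0 - abs(w - 555) / 200.0 for w in wavelengths]  # peaks near 555 nm

# Build the cumulative distribution once, then invert it per sample.
cdf = list(itertools.accumulate(weights))
total = cdf[-1]

def sample_wavelength_importance():
    """Pick a wavelength with probability proportional to its weight;
    returns the wavelength and its pdf for unbiased weighting later."""
    i = bisect.bisect_left(cdf, random.random() * total)
    return wavelengths[i], weights[i] / total
```

Dividing each sample's contribution by its pdf keeps the estimate unbiased while spending more rays where the eye is most sensitive.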
Update: I've added a second and third image, which include importance sampling the wavelengths with the proper weights and an expanded spectrum. As you can see, the second image is brighter and more vivid.
I rendered my third image quite large at 720p and with many instances of the diamond mesh.
Monday, June 16, 2014
Subsurface Scattering Updates
I've been spending time updating my subsurface scattering algorithm to incorporate more recent, state-of-the-art research. The original papers I was using were written over ten years ago, and many improvements to classical diffusion theory have been made since then. Incorporating these updates has definitely been worthwhile.
My current algorithm combines techniques from four different papers:
1) I use the single-scattering term as described in the original SSS paper by Jensen:
2) For multi-scattering, I make use of a hierarchical irradiance-caching point cloud as described here:
3) I use improved definitions of several terms as described in the following paper, including better boundary-condition and diffusion-coefficient terms, among others.
http://naml.us/~irving/papers/deon2011_subsurface.pdf
4) Finally, I replace the standard dipole-diffusion algorithm with a hybrid extended-dipole source / Monte Carlo simulation as described here:
You can see several of my newest images below. In addition to the hue of the materials matching reality more closely, increasing the translucency of the material does not cause the model to brighten up and "glow" like it did before - instead the illumination simply becomes more blurred and soft.
Jade Dragon Marble Statue
Scattering and Absorption Coefficients cut by two each
Scattering and Absorption Coefficients cut by four each
Sunday, June 15, 2014
Rough Glass Simulation
I made use of the importance sampling technique described in the following paper in order to more accurately simulate ground/rough glass and glossy mirrored surfaces:
https://www.cs.cornell.edu/~srm/publications/EGSR07-btdf.pdf
The paper extends the widely used Cook-Torrance BRDF (which I use as my "default" shading model for "normal" materials) into a BSDF (Bidirectional Scattering Distribution Function), which is the sum of a BRDF and a BTDF (Bidirectional Transmittance Distribution Function).
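For reference, the GGX microfacet distribution introduced in that paper, and its closed-form half-angle sampling, look roughly like this (a sketch under my own naming, not my renderer's code):

```python
import math
import random

def ggx_ndf(cos_h, alpha):
    """GGX normal distribution D(h) from Walter et al. 2007; cos_h is
    the cosine of the angle between the half vector and the surface
    normal, alpha is the roughness parameter."""
    a2 = alpha * alpha
    d = cos_h * cos_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * d * d)

def sample_ggx_half_angle(alpha):
    """Importance-sample theta_h proportional to D(h) cos(theta_h),
    using the closed-form inversion given in the same paper."""
    xi = random.random()
    return math.atan(alpha * math.sqrt(xi / (1.0 - xi)))
```

Sampling half vectors from D rather than the whole hemisphere is what makes the rough-glass renders converge at reasonable sample counts.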
Here are some of my results:
Monday, June 2, 2014
Subsurface Scattering!
My subsurface scattering implementation is a two-step process. In the first step, before rendering begins, the mesh is uniformly sampled and an irradiance calculation is performed at each sample point. These samples are then stored in a hierarchical point cloud, represented by an octree for fast lookup.
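The uniform mesh sampling in the first step can be sketched as area-weighted triangle selection plus a uniform barycentric point inside the chosen triangle (illustrative helper names, not my actual code):

```python
import bisect
import itertools
import math
import random

def triangle_area(a, b, c):
    """Area of a 3D triangle via the cross-product magnitude."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def sample_mesh_uniform(triangles):
    """Pick a surface point uniformly by area: choose a triangle with
    probability proportional to its area, then a uniform barycentric
    point inside it (the sqrt trick keeps the in-triangle density uniform)."""
    areas = [triangle_area(*t) for t in triangles]
    cdf = list(itertools.accumulate(areas))
    a, b, c = triangles[bisect.bisect_left(cdf, random.random() * cdf[-1])]
    s = math.sqrt(random.random())
    u, v = 1.0 - s, random.random() * s
    w = 1.0 - u - v
    return [u * a[i] + v * b[i] + w * c[i] for i in range(3)]
```

Each sampled point then gets an irradiance estimate and goes into the octree.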
The second step is the rendering pass. This step implements a BSSRDF (Bidirectional Scattering-Surface Reflectance Distribution Function), which is the sum of two terms: a single-scattering term and a multi-scattering term.
The single-scattering term accounts for light that enters the material and exits again after a single bounce. It is calculated by integrating the illumination along the refracted outgoing ray and uses a phase function (in my case Henyey-Greenstein) to control the material's anisotropy: whether light scatters mostly forward, mostly backward, or uniformly/isotropically.
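The Henyey-Greenstein phase function itself is compact enough to show in full (a sketch; cos_theta is the cosine of the angle between the incoming and outgoing directions):

```python
import math

def henyey_greenstein(cos_theta, g):
    """HG phase function. The anisotropy parameter g lies in (-1, 1):
    g > 0 is forward-scattering, g < 0 backward-scattering, g = 0
    isotropic (a constant 1 / 4pi over the sphere)."""
    denom = 1.0 + g * g - 2.0 * g * cos_theta
    return (1.0 - g * g) / (4.0 * math.pi * denom ** 1.5)
```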
The multiple scattering term is used for light that bounces around inside the material many times before exiting. I use a diffuse dipole-light source approximation combined with the irradiance samples computed in the first step to simulate multiple scattering. One pole of the source is placed above the material and the other inside it - the distance determined by the material's properties.
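A rough sketch of the classical dipole diffuse reflectance term, following Jensen et al.'s 2001 formulation (variable names are mine):

```python
import math

def dipole_rd(r, sigma_a, sigma_s_prime, eta):
    """Classical dipole diffuse reflectance R_d(r). r is the distance
    between entry and exit points on the surface; sigma_a and
    sigma_s_prime are the absorption and reduced scattering
    coefficients; eta is the relative index of refraction."""
    sigma_t_prime = sigma_a + sigma_s_prime
    alpha_prime = sigma_s_prime / sigma_t_prime
    sigma_tr = math.sqrt(3.0 * sigma_a * sigma_t_prime)
    # Diffuse Fresnel reflectance approximation and boundary term A
    f_dr = -1.440 / (eta * eta) + 0.710 / eta + 0.668 + 0.0636 * eta
    A = (1.0 + f_dr) / (1.0 - f_dr)
    z_r = 1.0 / sigma_t_prime          # real source, inside the material
    z_v = z_r * (1.0 + 4.0 / 3.0 * A)  # mirrored virtual source, above it
    d_r = math.sqrt(r * r + z_r * z_r)
    d_v = math.sqrt(r * r + z_v * z_v)
    return alpha_prime / (4.0 * math.pi) * (
        z_r * (1.0 + sigma_tr * d_r) * math.exp(-sigma_tr * d_r) / d_r ** 3
        + z_v * (1.0 + sigma_tr * d_v) * math.exp(-sigma_tr * d_v) / d_v ** 3)
```

This falls off with distance, which is why the cached irradiance samples near the exit point dominate the multi-scattering sum.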
Completely opaque statue rendered with a BRDF - No subsurface scattering here
Completely isotropic single-scattering term
Backwards anisotropic single-scattering term only with reduced extinction coefficient.
Diffuse multi-scattering term
Complete Combined Image
Mildly backward-scattering anisotropic version
Scattering and absorption terms cut by 4 each
Lit from behind to better show translucency
Friday, March 28, 2014
Sneak Peek - Texturing Meshes and New Lighting Types
Here's a sneak peek of some new features I've been working on.
1) Diffuse, Specular and Normal mapping for polygonal meshes
UV coordinates are parsed from the mesh file (in this case a Wavefront .obj file) and stored for use during the rendering pass. When a point on a triangle is hit, barycentric interpolation over the UV coordinates of the triangle's vertices finds the correct texture value to apply at that point. This technique works for diffuse, specular and normal mapping alike.
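The lookup can be sketched in a few lines (illustrative names, assuming the hit's barycentric coordinates are already known from the ray-triangle intersection):

```python
def interpolate_uv(bary, uv0, uv1, uv2):
    """Interpolate per-vertex UVs at a hit point given its barycentric
    coordinates (u, v, w), which sum to 1. The same weighted blend
    works for any per-vertex attribute (normals, colors, ...)."""
    u, v, w = bary
    return (u * uv0[0] + v * uv1[0] + w * uv2[0],
            u * uv0[1] + v * uv1[1] + w * uv2[1])
```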
2) Spot lighting and Directional lighting
Below we can see two spot lights shining on Mario (with global illumination). A spot light is much like a point light except that it emits radiance only within a cone, whose solid angle and direction are set by the user. Any object outside this cone receives no direct illumination from it.
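The cone test is simple enough to sketch (illustrative names, not my actual code):

```python
import math

def in_spot_cone(light_pos, light_dir, half_angle, point):
    """True if `point` lies inside the spotlight's cone. light_dir must
    be a normalized direction; half_angle is the cone's half-angle in
    radians. Points outside get zero direct illumination."""
    to_point = [p - l for p, l in zip(point, light_pos)]
    length = math.sqrt(sum(c * c for c in to_point))
    cos_angle = sum(d * c for d, c in zip(light_dir, to_point)) / length
    return cos_angle >= math.cos(half_angle)
```

Comparing cosines avoids an explicit arccos per shading point.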
And here's Mario and Luigi :-)
Saturday, March 22, 2014
Chess
Here's an image of a chessboard I rendered with shallow depth of field. The pinkish hue comes from diffuse inter-reflection (color bleed) off the (unseen) pink wall behind the camera.
Here is the same scene rendered in perfect focus with a pinhole camera