Thursday, August 13, 2015

Raycasting and Phong Shading

So far we took our time and stayed with just one rendering technique and a simple shading model. This time we leave the complex transformation calculations behind and move on to a more photorealistic rendering and shading algorithm.

Raycasting

Raycasting is the predecessor of raytracing. It simply shoots several rays through a plane and runs an intersection query against every surface in our scene. Each ray returns the modified color of the closest surface it finds. By "modified" I mean after going through a shading algorithm.

The number of rays equals the number of pixels we need to render; in other words, it matches the size of our pixel and depth buffers. All rays start at the camera position. After the transformation into view space the camera sits at (0/0/0), and the plane from which the rays get their direction vectors is the near plane we discussed in the 3D viewing section.



The size of the plane can be derived from our viewport parameters: field of view, screen width and height, and the near plane. You can see the corner points of that view plane in the figure above. But we don't want the color at a corner; we want the one where the ray goes exactly through the center of the pixel (marked as black dots).

The best way to do so is to calculate the distance from one pixel center to the next. That means we divide the side lengths of the view plane by the number of pixels on each side.

stepX = planeWidth / screenWidth
stepY = planeHeight / screenHeight

After that we need a starting point. Using the value of the top-left corner for x = 0 and y = 0 we would hit exactly the corner of the pixel, which we DON'T WANT. We have to add a half step; then we hit exactly the center of the intended pixel. We also mustn't forget to mirror the y coordinate.

V.x = -planeWidth / 2 + (x + 0.5) * stepX
V.y = planeHeight / 2 - (y + 0.5) * stepY
V.z = near

Here x and y are pixel coordinates ranging from 0 to width-1 and from 0 to height-1, and V is the resulting point on the view plane.

The near value can be skipped, by the way, which means the view plane sits 1 unit away from the camera. You only need near when you don't want to draw pixels outside of the viewport.

The last problem we need to fix is that each ray would have a different length. We would run into trouble with our depth buffer if 1 unit had a different length per ray (like mixing up meters and yards for different pixels in the depth buffer). So we need to normalize the ray's direction vector. Our ray formula looks like this:

P(t) = Start + t * v,  with |v| = 1

In the case of raycasting, Start is the zero point: the position of our camera in view space.
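The ray setup above can be sketched in C++ as follows. All names (Vec3, primaryRayDir, fovY) are my own, not the framework's, and I assume a view space with the camera at the origin looking along the positive z axis and the view plane at z = near:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Build the normalized ray direction through the center of pixel (px, py).
Vec3 primaryRayDir(int px, int py, int width, int height,
                   double fovY, double near)
{
    // View-plane size from the vertical field of view and the aspect ratio
    double planeH = 2.0 * near * std::tan(fovY * 0.5);
    double planeW = planeH * (double)width / (double)height;

    // Distance from one pixel center to the next
    double stepX = planeW / width;
    double stepY = planeH / height;

    // Half-step offset hits the pixel center; y is mirrored (top-left origin)
    Vec3 v;
    v.x = -planeW * 0.5 + (px + 0.5) * stepX;
    v.y =  planeH * 0.5 - (py + 0.5) * stepY;
    v.z =  near;

    // Normalize so 1 unit of t means the same distance for every ray
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    v.x /= len; v.y /= len; v.z /= len;
    return v;
}
```

With this setup the ray through the middle pixel of an odd-sized buffer points straight down the z axis, which is a handy sanity check.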

Ray Intersections

Rays have to perform lots of intersection calculations to figure out whether they actually "see" a surface. More precisely, there are width * height * (number of surfaces) calculations to be done. That is why raycasting, and later raytracing, are not very efficient.

Ray-Plane-Intersection

We have been talking about surfaces most of the time, so why is this subsection called Ray-Plane-Intersection? Because our triangulated surface IS part of a plane. The plane equation is

A*x + B*y + C*z + D = 0

where N is the normal vector of the surface, P0 is a corner point of the plane (surface) and P is an unknown point on the plane. A, B and C are the normal's x, y and z components, and D is the negative dot product of the normal and a corner point.

Replacing x, y and z with the ray formula, we can calculate the distance t.

t = -(N · Start + D) / (N · v)

As long as N and v are not perpendicular (if they were, the ray's direction would be parallel to the plane's surface) we will always hit the plane. Putting t into our ray equation gives us the hit point, but one thing still has to be done: there is a chance that the hit point is NOT inside the triangle.
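The ray-plane step can be sketched like this (a minimal sketch; the names intersectPlane, S and the epsilon threshold are my own assumptions, not from the framework):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Distance t along the ray (start S, direction v) to the plane N·P + D = 0.
// Returns false when N and v are perpendicular, i.e. the ray runs parallel
// to the plane and there is no single hit point.
bool intersectPlane(Vec3 S, Vec3 v, Vec3 N, double D, double& t)
{
    double denom = dot(N, v);
    if (std::fabs(denom) < 1e-9) return false;  // parallel: no intersection
    t = -(dot(N, S) + D) / denom;
    return true;
}
```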


There are a few ways to figure out whether the point lies inside or outside of the triangle: a same-side test or the use of barycentric coordinates. Both methods are described here. In my eyes the same-side test takes longer, so I suggest using barycentric coordinates. How to calculate them was already explained in my previous post, 2D Texture Mapping.

I suggest using the Möller-Trumbore intersection algorithm, which works on the given ray directly. With this method we can calculate s and t directly and check whether the conditions for a point inside a triangle hold.
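A sketch of Möller-Trumbore under my own naming (I call the two barycentric coordinates u and v here; the post calls them s and t):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y,
                                            a.z*b.x - a.x*b.z,
                                            a.x*b.y - a.y*b.x}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Intersect ray (start, dir) with triangle (p0, p1, p2). Returns true and
// writes the distance t when the hit satisfies u >= 0, v >= 0 and u + v <= 1.
bool intersectTriangle(Vec3 start, Vec3 dir,
                       Vec3 p0, Vec3 p1, Vec3 p2, double& t)
{
    const double EPS = 1e-9;
    Vec3 e1 = sub(p1, p0);
    Vec3 e2 = sub(p2, p0);
    Vec3 p  = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < EPS) return false;   // ray parallel to the plane

    double inv = 1.0 / det;
    Vec3 s = sub(start, p0);
    double u = dot(s, p) * inv;               // first barycentric coordinate
    if (u < 0.0 || u > 1.0) return false;

    Vec3 q = cross(s, e1);
    double v = dot(dir, q) * inv;             // second barycentric coordinate
    if (v < 0.0 || u + v > 1.0) return false;

    t = dot(e2, q) * inv;                     // distance along the ray
    return t > EPS;                           // hit must lie in front of us
}
```

The nice part is that the plane test and the inside-triangle test come out of one pass, so no separate barycentric conversion is needed.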

Ray-Sphere-Intersection

Even a sphere is built from multiple triangles, so why is there an extra formula for spheres? That's simple: the sphere is not here for its own sake, but to group several surfaces into one bounding sphere. Only if the sphere is hit by the ray do we calculate possible intersections with the surfaces inside of it.

Three ways a ray can intersect a sphere

The sphere equation is

(x - C.x)^2 + (y - C.y)^2 + (z - C.z)^2 = r^2

where C is the sphere's center. Replacing x, y and z with the ray equation we get

(S.x + t*v.x - C.x)^2 + (S.y + t*v.y - C.y)^2 + (S.z + t*v.z - C.z)^2 = r^2

where S is the ray's starting point and v is the ray's direction vector. The formula is a bit long, so we shorten it with the vector between the sphere's center point and the ray's starting point, d = C - S.

(t*v.x - d.x)^2 + (t*v.y - d.y)^2 + (t*v.z - d.z)^2 = r^2

Because we want to solve for t, we group all terms by powers of t and get the following quadratic equation:

(v · v) * t^2 - 2 * (v · d) * t + (d · d - r^2) = 0

or in short

a*t^2 + b*t + c = 0,  with a = v · v, b = -2 * (v · d), c = d · d - r^2

All that's left is to solve this quadratic equation:

t = (2 * (v · d) ± sqrt(4 * (v · d)^2 - 4 * (v · v) * (d · d - r^2))) / (2 * (v · v))

The formula looks quite long, but we have the option to normalize v. That way the length of v is 1 and the factor v · v drops out of the formula. The factor 2 also cancels, which gets rid of the fraction.

t = (v · d) ± sqrt((v · d)^2 - d · d + r^2)

The sign of the square root's content, the discriminant (not the square root's result), decides which case we are in (see figure above):

  • discriminant > 0 => 2 hits (take the closest one, return true)
  • discriminant = 0 => 1 hit (return true)
  • discriminant < 0 => 0 hits (return false)
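Assuming a normalized direction v, the whole test fits in a few lines. This is a sketch with my own names (intersectSphere, S, C, r):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Ray-sphere test: start S, normalized direction v, center C, radius r.
// With d = C - S the discriminant (v·d)^2 - d·d + r^2 decides the three
// cases: > 0 two hits, = 0 one hit (tangent), < 0 no hit.
bool intersectSphere(Vec3 S, Vec3 v, Vec3 C, double r, double& t)
{
    Vec3 d = {C.x - S.x, C.y - S.y, C.z - S.z};
    double b = dot(v, d);                     // projection of d onto the ray
    double disc = b * b - dot(d, d) + r * r;  // square root's content
    if (disc < 0.0) return false;             // ray misses the sphere

    double root = std::sqrt(disc);
    t = b - root;                             // closer of the two hits
    if (t < 0.0) t = b + root;                // closer hit lies behind us
    return t >= 0.0;
}
```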

Phong shading

Phong shading is the combination of 3 different color reflections:
  • Ambient reflection
  • Diffuse reflection
  • Specular reflection
The sum of these reflection types gives the color drawn for the pixel. The last two types are accumulated per light source.

I = I_ambient + Σ (I_diffuse + I_specular)

In all reflection types I will use M for material and L for light source.

Vectors needed for Phong shading (H is only used in Blinn-Phong shading)

Ambient reflection

With ambient reflection we simulate global illumination. That means we don't actually follow the light's physics.
I_ambient = M_ambient * L_ambient

Attenuation

To understand attenuation, take a flashlight and light up a dark room (e.g. your room at night). If you are close to the wall it shines pretty brightly, but try to light that wall while walking backwards. You will see that the wall is no longer as bright as it was close up.

Attenuation can be seen as light getting weaker with greater distance. But different light sources weaken differently. Therefore we use 4 components to calculate the scalar attenuation value:

  • constant attenuation
  • linear attenuation
  • quadratic attenuation
  • distance between point and light source

Diffuse reflection

The diffuse part is a mix of the following components:
  • surface color (i.e. texture at P)
  • material diffuse color
  • light diffuse color
  • angle between the surface normal and the direction to the light source (dot product)
The dot product between light direction and normal (if both are normalized) returns cos(x), which lies between -1 and +1 (as in Lambert shading). We can only work with positive values; negative values become 0.

I_diffuse = attenuation * surfaceColor * M_diffuse * L_diffuse * max(0, N · lightDir)

Specular reflection

Last but not least we need the so-called specular effect, the shiny highlight you see on metallic objects or bells. We need:
  • material specular color
  • light specular color
  • material shininess constant
  • direction of reflected light
  • direction to camera
First we need the reflection of an incoming vector. With a given normal we can calculate

R = V - 2 * (N · V) * N

where R is the reflected vector, V is the incoming vector and N is the normal. This formula will be used a lot in raytracing, where we can simulate mirror effects.

In this case the incoming vector is (P - L), the opposite of the direction we needed for diffuse reflection. Now we need the direction to our camera. After the scene transformation the camera's position became (0,0,0), so we can use V = -P.
I_specular = attenuation * M_specular * L_specular * max(0, R · toCamera)^shininess

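The reflection formula and the specular term can be sketched together (a sketch under my own names; all vectors are assumed normalized and the color factors are multiplied on top of the returned scalar):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// R = V - 2 (N · V) N: reflect the incoming vector V about the normal N.
Vec3 reflect(Vec3 V, Vec3 N)
{
    double k = 2.0 * dot(N, V);
    return {V.x - k * N.x, V.y - k * N.y, V.z - k * N.z};
}

// Specular scalar for one light: (R · toCamera)^shininess, clamped at 0 so
// highlights never go negative.
double specularIntensity(Vec3 incoming, Vec3 normal, Vec3 toCamera,
                         double shininess)
{
    Vec3 R = reflect(incoming, normal);
    double s = std::max(0.0, dot(R, toCamera));
    return std::pow(s, shininess);
}
```

A higher shininess constant narrows the highlight, since raising a cosine below 1 to a large power pushes it towards 0 everywhere except near the perfect mirror direction.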
Framework

With raycasting we need some new classes. Of course I also updated the previously created Material class:
  • Renderer3DRaycasting: derived from Renderer3D. render function and raycasting function
  • Object3D: added abstract getNormal() function (won't be abstract anymore in next issue)
  • Light: Point light class with shading attributes
  • Material: class with shading attributes
  • Ray: line segment with start point and direction. Essential for raycasting
  • Scene3D: added material and light source lists and separated the initialization into 4 functions
  • Shader: class with shader functions (phong shading for now)
I figured out that cmath's abs(double) function doesn't behave the same under VC++ and the GNU compiler. It took me an hour to realize that. My tip: NEVER USE abs(double); call std::fabs instead.

Raycasting works "per pixel", which is totally different from rasterization. We need to ask each surface for an intersection and only keep the one closest to our camera. If there is no surface in the pixel's direction, we keep the background color; otherwise we take the intersection point and continue with Phong shading.
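The closest-hit loop can be sketched generically. This is not the framework's Renderer3DRaycasting; I stand in for Object3D with a plain std::function so the idea stays self-contained:

```cpp
#include <cstddef>
#include <functional>
#include <limits>
#include <vector>

// One intersection test per surface: fills t on a hit (stand-in for the
// framework's Object3D interface).
using IntersectFn = std::function<bool(double& t)>;

// Returns the index of the surface closest to the camera along the current
// ray, or -1 when nothing is hit (keep the background color in that case).
int closestSurface(const std::vector<IntersectFn>& surfaces)
{
    int best = -1;
    double bestT = std::numeric_limits<double>::max();
    for (std::size_t i = 0; i < surfaces.size(); ++i) {
        double t;
        if (surfaces[i](t) && t < bestT) {  // keep only the nearest hit
            bestT = t;
            best = (int)i;
        }
    }
    return best;
}
```

The render loop then runs this once per pixel and hands the winning hit point to the shading function.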

In the Phong shading function, check that we have at least one light source and a surface to shade. For each light source, calculate the attenuation and the ambient, diffuse and specular intensities. Apply the surface color only to the ambient and diffuse parts.

Texture coordinates for Cube and Sphere

Some of you might have noticed that I skipped 3D texture mapping. I'm sorry about that. Instead I added predefined texture coordinates for the cube and the sphere. The advantage is that you can use a single function to add textures. To see how that works, there are 2 images in the folder "sources".

Texture for cube, to see which area has to be placed

World map for sphere.

Output

For comparison, here is the same scene rendered twice: one image has specular reflection, the other one has none (look at the earth).

Rendered scene with specular effect

Rendered scene without specular effect

Conclusion

Raycasting is the first step towards a more realistically rendered world. On the other hand it takes longer to render scenes because of the "per pixel" approach, but the wait is worth it, especially for more complex scenes.

If you think I'll start with raytracing in the next issue, you guessed wrong. I have something special in mind for you, but I'll keep it a secret. You'll have to wait for my next post. There is one hint I can give you:

We will render professionally designed objects.

See you in my next post.

Repository


Tested with:
  • Visual Studio 2013 (Windows 8.1)
  • GNU Compiler 4.9.2 (Ubuntu 12.04)
