Wednesday, July 8, 2015

2D Texture Mapping

This article describes how we can add textures to existing surfaces. That way we can make objects look more realistic and fancy. This method is called Texture Mapping.

Loading images to use as textures

Until now we were only able to save rendered images. It is much nicer to use some pictures (of yourself, your pet, friends etc.) as textures. Anymaps may have a simple structure, but we still need a parsing tool to get the information we need.

Anymaps can come in both text and binary form, therefore we need 2 different parsers. The header is always text, but we need to watch out for "#": it marks a comment that runs until the next newline "\n", similar to "//" and "/*...*/" in C, C++ and Java.

My idea for an ideal PNM parser looks like this:

  1. get the file content as a string
  2. erase the characters from "#" to the next "\n" (the header only allows 1 "#")
  3. put the string into a stringstream (#include <sstream>)
  4. get the header information: sstream >> magic_number >> width >> height
  5. if magic_number is neither "P1" nor "P4" (PBM), read the color depth: sstream >> color_depth
  6. read the color values depending on ASCII (P1, P2, P3) or binary (P4, P5, P6) format
A stringstream ignores whitespace and newlines when using the input operator ">>", and it also converts the tokens into strings and numbers for us.
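The steps above can be sketched like this (header only; reading the pixel data of step 6 is omitted, and PnmHeader/parsePnmHeader are made-up names, not part of the framework):

```cpp
#include <cassert>
#include <sstream>
#include <string>

struct PnmHeader {
    std::string magic;
    int width = 0, height = 0, maxval = 0;
};

PnmHeader parsePnmHeader(std::string content) {
    // step 2: erase the characters from '#' to the next '\n' (one comment allowed)
    std::size_t hash = content.find('#');
    if (hash != std::string::npos) {
        std::size_t eol = content.find('\n', hash);
        content.erase(hash, eol - hash + 1);
    }
    // steps 3-5: stream extraction skips whitespace and newlines for us
    std::istringstream in(content);
    PnmHeader h;
    in >> h.magic >> h.width >> h.height;
    if (h.magic != "P1" && h.magic != "P4")
        in >> h.maxval; // PBM has no color depth field
    return h;
}
```

After this call the stream position sits right at the color values, so the format-specific readers of step 6 can take over.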

Setting Texture Anchor Points

The easiest method, which works for most 2D graphics, is to assign texture anchor points from 0 to 1. We do that by getting MIN and MAX for each coordinate of an object and calculating the ratio for each point in our Object2D.



These u and v values can be saved as a Vector2D in a Surface2D, which gets extended by a reference to a Texture object and 3 anchor points. It is important that the order of these points is the same as that of the Surface points. So don't mix them up, or our texture might look strange when rendered.
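A minimal sketch of this anchor-point calculation, assuming a plain Vector2D with x and y (the function name anchorPoints is made up, and the object is assumed to have a non-zero extent in both directions):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Vector2D { double x, y; };

// Map every point of an object to texture anchor coordinates (u, v)
// in [0, 1] using the object's bounding box (MIN and MAX per axis).
std::vector<Vector2D> anchorPoints(const std::vector<Vector2D>& points) {
    double minX = points[0].x, maxX = points[0].x;
    double minY = points[0].y, maxY = points[0].y;
    for (const Vector2D& p : points) {
        minX = std::min(minX, p.x); maxX = std::max(maxX, p.x);
        minY = std::min(minY, p.y); maxY = std::max(maxY, p.y);
    }
    std::vector<Vector2D> uv;
    for (const Vector2D& p : points) {
        // ratio of the point's position inside the bounding box
        uv.push_back({ (p.x - minX) / (maxX - minX),
                       (p.y - minY) / (maxY - minY) });
    }
    return uv;
}
```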

Textures

In our framework we will connect a texture with a picture. The differences between object coordinates, texture coordinates and pixel coordinates are important: pixel coordinates are limited to a picture's size, while world coordinates are infinite. A texture can be used on multiple objects, but each object, or rather each surface, can use only one texture at a time.

The easiest object to put a texture on is a rectangle, because it already looks like the frame of a picture. But remember, we use 2 triangles for 1 rectangle (triangles are the simplest surfaces in computer graphics). That means we need to define texture coordinates for some points of the rectangle twice.
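As a small illustration (the struct names are hypothetical, not the framework's), the two triangles of a rectangle could carry the full texture like this, with the corners on the shared diagonal defined twice:

```cpp
#include <cassert>

struct UV { double u, v; };
struct TexturedTriangle { UV a, b, c; };

// A rectangle split along the diagonal from uv (0,0) to uv (1,1):
// those two corners appear in both triangles, so their texture
// coordinates are stored twice.
const TexturedTriangle kFirst  = { {0, 0}, {1, 0}, {1, 1} };
const TexturedTriangle kSecond = { {0, 0}, {1, 1}, {0, 1} };
```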

The first thing we do is create a Texture object and assign an Image to it. With the coordinates u and v containing values between 0 and 1, we get pixel coordinates by calculating:

x = u · width,   y = v · height

But a simple multiplication with width and height returns floating point numbers. The only exact pixel value can be found at a pixel's center, and hitting this point exactly is nearly impossible. We need an interpolation between 4 pixels.

In the picture above we see a 3x3 image, visualized with a center dot for each pixel. From the Surface2D we received values for u and v which result in the following pixel coordinates (marked with a blue dot):

x = 1.7,   y = 2.2

Our priority pixel is ImageColor(1,2); its center lies at (1.5, 2.5). Calculating the difference between the pixel coordinates and the priority center, we get +0.2 for x and −0.3 for y. This means that we also need the following pixel colors and interpolate first within each row and then between the rows, for each color channel, placing x and y as the top left coordinate:

ImageColor(1,1), ImageColor(2,1)
ImageColor(1,2), ImageColor(2,2)

The result is a mix of the 4 pixels, yielding an averaged color.
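A sketch of such a 4-pixel mix, written here as a standard bilinear filter (the exact rounding in the framework may differ; Color and bilinearSample are made-up names):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Color { double r, g, b; };

// Sample the 4 pixels whose centers surround (u*width, v*height) and
// mix them by the fractional distances: first within each row, then
// between the two rows.
Color bilinearSample(const std::vector<std::vector<Color>>& img,
                     double u, double v) {
    int width  = static_cast<int>(img[0].size());
    int height = static_cast<int>(img.size());
    // pixel centers sit at (i + 0.5, j + 0.5)
    double px = u * width  - 0.5;
    double py = v * height - 0.5;
    int x0 = static_cast<int>(std::floor(px));
    int y0 = static_cast<int>(std::floor(py));
    double fx = px - x0, fy = py - y0;
    auto clampX = [&](int x) { return std::max(0, std::min(width  - 1, x)); };
    auto clampY = [&](int y) { return std::max(0, std::min(height - 1, y)); };
    auto at  = [&](int x, int y) { return img[clampY(y)][clampX(x)]; };
    auto mix = [](Color a, Color b, double t) {
        return Color{ a.r + (b.r - a.r) * t,
                      a.g + (b.g - a.g) * t,
                      a.b + (b.b - a.b) * t };
    };
    Color top    = mix(at(x0, y0),     at(x0 + 1, y0),     fx);
    Color bottom = mix(at(x0, y0 + 1), at(x0 + 1, y0 + 1), fx);
    return mix(top, bottom, fy);
}
```

Clamping at the borders keeps samples near the image edge from reading outside the picture.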

Getting Texture coordinates from Screen coordinates

The usual render routine starts with transforming object coordinates into world coordinates, and from those into screen coordinates. Only the world coordinates of each Surface2D get transformed; the texture coordinates stay untouched. But render algorithms like rasterization only give us x and y. To get u and v we start with the following formula for calculating a point of a triangle:

P = P0 + s · (P1 − P0) + t · (P2 − P0)

There are 2 ways of getting the scalar values s and t.

Calculating s and t directly

We calculate s and t directly by solving the above equation for s and t. To save some space we use v1 and v2, which equal (P1 − P0) and (P2 − P0). Normally P is unknown, but in this case we know the point and can use v as (P − P0).

s = (v.x · v2.y − v.y · v2.x) / (v1.x · v2.y − v1.y · v2.x)

t = (v1.x · v.y − v1.y · v.x) / (v1.x · v2.y − v1.y · v2.x)

The point is inside the triangle if the following conditions are fulfilled: s ≥ 0, t ≥ 0 and s + t ≤ 1.
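A sketch of this direct calculation using the 2D cross product (solveST is a made-up name; it also reports whether the point lies inside the triangle):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { double x, y; };

double cross(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }

// Solve P = P0 + s*v1 + t*v2 for s and t, with v1 = P1-P0,
// v2 = P2-P0 and v = P-P0 (Cramer's rule via the 2D cross product).
bool solveST(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p, double& s, double& t) {
    Vec2 v1 = { p1.x - p0.x, p1.y - p0.y };
    Vec2 v2 = { p2.x - p0.x, p2.y - p0.y };
    Vec2 v  = { p.x  - p0.x, p.y  - p0.y };
    double denom = cross(v1, v2);
    s = cross(v, v2) / denom;
    t = cross(v1, v) / denom;
    // inside the triangle if s >= 0, t >= 0 and s + t <= 1
    return s >= 0 && t >= 0 && s + t <= 1;
}
```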

Using Barycentric coordinates

Another way to get s and t is by using barycentric coordinates. To see the connection we rearrange the formula for calculating a point of a triangle:

P = (1 − s − t) · P0 + s · P1 + t · P2

We could say that (1 − s − t) can be called r, but barycentric coordinates traditionally use Greek letters. So the formula looks like this:

P = α · P0 + β · P1 + γ · P2

With the condition:

α + β + γ = 1

A point P forms 3 subtriangles with the corner points of the original triangle. α, β and γ define the ratio of the area of the subtriangle on the opposite side of the corresponding point, compared to the area of the original triangle.



This means we can calculate the barycentric constants like this:

α = Area(P, P1, P2) / Area(P0, P1, P2)
β = Area(P0, P, P2) / Area(P0, P1, P2)
γ = Area(P0, P1, P) / Area(P0, P1, P2)

If you are sure that the point P lies exactly on the triangle and not a bit above/below it, you can also calculate α = 1 − β − γ. But this certainty only exists in 2D, not in 3D.

After that we can use β and γ as the constants s and t from our triangle formula and apply them to our texture formula, which has the same form as the triangle formula.
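A sketch of this area-based interpolation (interpolateUV is a made-up name; it reuses β and γ as s and t to interpolate the texture coordinates):

```cpp
#include <cassert>
#include <cmath>

struct V2 { double x, y; };

// Twice the signed area of a triangle; the factor 2 and the sign
// cancel out when we take ratios of areas.
double area(V2 a, V2 b, V2 c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

V2 interpolateUV(V2 p0, V2 p1, V2 p2, V2 p,
                 V2 uv0, V2 uv1, V2 uv2) {
    double full  = area(p0, p1, p2);
    double beta  = area(p0, p, p2) / full;  // subtriangle opposite P1
    double gamma = area(p0, p1, p) / full;  // subtriangle opposite P2
    double alpha = 1.0 - beta - gamma;      // safe in 2D (P lies in the plane)
    return { alpha * uv0.x + beta * uv1.x + gamma * uv2.x,
             alpha * uv0.y + beta * uv1.y + gamma * uv2.y };
}
```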

Framework

I made almost no changes to the framework; only a few classes received major changes.
I skipped the part where you can change texture coordinates on objects manually; I leave that to your skills ;). I also added 2 example pictures (found on TripAdvisor) as PPM files to the repository. With my settings in main.cpp I get the following output:

Please take a look at the two rectangles, each showing a picture of the Eiffel Tower. The left one got linked with a texture after transformation, the right one before transformation.

Conclusion

We learned how to load a Portable Anymap, put it into a texture and link the texture to a 2D object so that it can automatically set texture anchor points for each point. This was the final chapter about 2D rendering; there is hardly anything left to add. You should now be able to use textures and create your own 2D objects (like triangles or even polygons).

The next issue finally marks the start of 3D rendering. We will need Object3D classes put into a Scene3D, and we will extend the rasterization algorithm to also consider pixel depth, using a so-called depth buffer.

See you in the next issue.

Repository


Tested with:
  • Visual Studio 2013 (Windows 8.1)
  • GNU Compiler 4.9.2 (Ubuntu 12.04)
