Following my last post on modelling, I'll be talking about UV mapping, texturing and shaders, rigging, and animation.
UV mapping is a vital step in bringing a model to life: it lets the creator map the geometry they created so that images can later be placed on the triangles easily.
In the technical sense, the 3D points in space used to create the triangles are stored in a list. Somewhere else in memory is a list of 2D points that represent locations on an image. Each 2D point is linked to a 3D point, and when the computer needs to draw the object on screen, it does some maths to find which pixel of the image each part of a triangle should show. So by setting up the 2D points, the creator can later draw on the image and the appropriate pixels will appear on the model.
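To make that concrete, here's a minimal sketch of the maths involved: the corner UVs of a triangle are blended (using barycentric weights) to pick a pixel out of an image. The function names and the tiny 2x2 "image" are made up for illustration; real renderers do this per fragment on the GPU.

```python
def sample_uv(uvs, bary, image):
    """uvs: three (u, v) pairs, one per triangle corner.
    bary: barycentric weights of a point inside the triangle.
    image: 2D list of pixel values, indexed image[row][col]."""
    # Interpolate the corner UVs by the barycentric weights.
    u = sum(w * uv[0] for w, uv in zip(bary, uvs))
    v = sum(w * uv[1] for w, uv in zip(bary, uvs))
    # Convert the 0..1 UV range into pixel indices.
    h, w = len(image), len(image[0])
    col = min(int(u * w), w - 1)
    row = min(int(v * h), h - 1)
    return image[row][col]

image = [["red", "green"],
         ["blue", "white"]]
uvs = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]

# The point sitting exactly at the first corner samples the top-left pixel.
print(sample_uv(uvs, (1.0, 0.0, 0.0), image))  # red
```

The key idea is that the 2D list (`uvs`) lives separately from the 3D vertex positions; changing the image changes what the triangles display without touching the geometry.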
Texturing is the step directly after UV mapping and is the sole reason UV mapping is needed. Through this step, all detail that isn't geometry is created. If texturing were never done, we'd only have solid-colour graphics.
When texturing, the creator needs to convey the necessary detail on the triangles by making sure the pixels are where they need to be; through a UV map, they can see where certain faces sit in the image they are editing.
Shaders are fun; they're also where the desired art style can fall apart if not done correctly. They are mathematical functions that take all the input needed and manipulate the pixels to produce the intended visuals.
An example of how shaders can work is a human body. Human skin has something called subsurface scattering (light shining through the skin, showing the blood underneath), which, if done correctly, makes a human model appear realistic. Alternatively, the artist may choose a more stylised look, such as cel-shading.
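As a hedged sketch of the "mathematical function" idea, here is the core of a cel (toon) shading step: ordinary diffuse lighting, quantised into a few flat bands instead of a smooth gradient. The function names and band count are illustrative; a real shader runs this per pixel on the GPU.

```python
import math

def cel_shade(normal, light_dir, bands=3):
    """Return a brightness in 0..1 snapped to a small number of flat bands.
    normal and light_dir are unit vectors."""
    # Lambertian term: cosine of the angle between surface and light.
    dot = sum(n * l for n, l in zip(normal, light_dir))
    diffuse = max(dot, 0.0)
    # Quantise: snap the smooth gradient down to discrete steps,
    # which gives the flat "cartoon" look.
    return math.floor(diffuse * bands) / bands

# A surface facing the light lands in the brightest band...
print(cel_shade((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # 1.0
# ...while a grazing angle snaps down to a lower flat band.
print(cel_shade((0.0, 0.0, 1.0), (0.0, 0.8, 0.6)))
```

Swapping the quantise line for a smooth curve (or a subsurface approximation) changes the entire look of the model without touching geometry or textures, which is why shaders make or break an art style.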
Rigging is a step that doesn't need UV mapping and texturing to be completed first, but for workflow reasons it's generally done after those two.
Rigging is the process of attaching vertices (the points in space that are connected to form triangles) to "bones", which are just points in space with rotation and scale. When a bone moves, the vertices attached to it move with it, so deforming the skeleton translates directly into the mesh deforming appropriately. Once this is done, an animation can move and rotate these bones, and only a few numbers need to be stored to describe the motion.
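A minimal sketch of that idea, in the simplest possible form: every vertex follows a single bone, so rotating the bone swings all of its vertices with it. (Production rigs blend several bones per vertex with weights; the names here are made up for illustration, and I've kept it 2D to keep the rotation maths short.)

```python
import math

def rotate(point, angle):
    """Rotate a 2D point around the origin by `angle` radians."""
    x, y = point
    c, s = math.cos(angle), math.sin(angle)
    return (x * c - y * s, x * s + y * c)

def skin(vertices, bone_angle):
    """Move every vertex attached to the bone by the bone's rotation.
    Only the one angle needs to be stored to reproduce this pose."""
    return [rotate(v, bone_angle) for v in vertices]

# Two vertices attached to one bone; rotating the bone 90 degrees
# swings both of them around with it.
arm = [(1.0, 0.0), (2.0, 0.0)]
posed = skin(arm, math.pi / 2)
print([(round(x, 3), round(y, 3)) for x, y in posed])  # [(0.0, 1.0), (0.0, 2.0)]
```

Note that the mesh data never changes; only the bone's angle is stored per pose, which is exactly why rigged animation is so compact.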
Animation (3D) is the process of manipulating a mesh over time to give the desired result. This result can be a facial expression, a person talking, walking, or running, a monster attacking or being hit, or even the actions of a character through an entire 3D animated film. Its applications generally come down to displaying whatever is needed of a character.
Animating is usually done with keyframes: all bones are arranged appropriately at certain intervals, which aren't necessarily every frame shown to the viewer. The frames between each keyframe are calculated by interpolating mathematically between them, using a percentage worked out on the fly.
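The in-between calculation can be sketched as simple linear interpolation between two stored bone angles; the percentage is just how far the current frame sits between the surrounding keyframes. (Real animation systems use easing curves and quaternions for rotation; the names below are illustrative.)

```python
def lerp(a, b, t):
    """Blend from a to b by fraction t (0 gives a, 1 gives b)."""
    return a + (b - a) * t

def pose_at(keyframes, frame):
    """keyframes: sorted list of (frame_number, bone_angle) pairs.
    Returns the interpolated angle at `frame`."""
    for (f0, a0), (f1, a1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            # Percentage of the way between the two keyframes,
            # calculated on the fly.
            t = (frame - f0) / (f1 - f0)
            return lerp(a0, a1, t)
    raise ValueError("frame outside the keyframed range")

# Only two poses are stored (frames 0 and 10); frame 5 is computed.
keys = [(0, 0.0), (10, 90.0)]
print(pose_at(keys, 5))  # 45.0
```

So a ten-second animation might store only a handful of keyframes per bone, with every displayed frame in between reconstructed this way.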