Tuesday, 22 March 2011
Character models
I don't really think of myself as good at creating characters; in fact, I think I'm terrible, but I decided to have a go and make one. I don't think it's too bad so far.
Tuesday, 7 December 2010
Radiosity and Ray-tracing
Radiosity is a rendering technology that realistically simulates the way light interacts with an environment. By simulating the light in a scene more precisely, radiosity offers benefits over standard lighting.
Radiosity produces more accurate photometric simulations of the lighting in a scene. Effects such as indirect light, soft shadows and color bleeding between surfaces produce natural-looking images that are not possible with standard rendering. These images give you a better, more realistic idea of how your designs will look under particular lighting conditions.
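To make the idea of light bouncing between surfaces a bit more concrete, here is a minimal Python sketch of the "gathering" step a radiosity-style solver repeats; it is not how 3ds Max or any real renderer implements it, and the patch values, reflectances and form factors below are all made up for illustration.

```python
# A minimal sketch of a radiosity-style "gathering" pass (illustrative only).
# Assumes the scene has already been split into patches and that the form
# factors (how much each patch "sees" every other patch) are precomputed.

def gather_radiosity(emission, reflectance, form_factors, iterations=8):
    """Iteratively estimate the radiosity (light leaving) of each patch."""
    n = len(emission)
    radiosity = list(emission)            # patches start with only emitted light
    for _ in range(iterations):
        new_radiosity = []
        for i in range(n):
            # Light arriving at patch i is the sum of light leaving every other
            # patch j, weighted by the form factor between j and i.
            incoming = sum(form_factors[i][j] * radiosity[j]
                           for j in range(n) if j != i)
            # Patch i emits its own light plus the reflected share of incoming light.
            new_radiosity.append(emission[i] + reflectance[i] * incoming)
        radiosity = new_radiosity
    return radiosity

# Tiny made-up example: patch 0 is a light source, patches 1 and 2 are pale walls.
print(gather_radiosity(
    emission=[1.0, 0.0, 0.0],
    reflectance=[0.0, 0.8, 0.8],
    form_factors=[[0, 0.3, 0.3], [0.3, 0, 0.3], [0.3, 0.3, 0]],
))
```

After a few iterations the walls pick up light even though they emit none themselves, which is exactly the indirect-lighting effect described above.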

Photometry is the science of measuring light in terms of its perceived brightness to the human eye. It is measured in units such as lumens, candelas and lux; for example, one lux is one lumen of light spread evenly over one square metre.
Local illumination algorithms describe only how individual surfaces reflect or transmit light. Given a description of the light hitting a surface, these mathematical algorithms (in 3ds Max they are called shaders) predict the color, intensity and distribution of the light leaving that surface. In conjunction with a material description, different shaders determine things like roughness and whether the object looks like metal or plastic. After working out how the surface reacts to the light, the next task is to figure out where the light hitting the object originates.
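As a toy example of a local illumination "shader", here is a rough Python sketch of simple Lambert (diffuse) shading. It is not how 3ds Max implements its shaders, and the function and parameter names are invented; it just shows the idea that, given the light hitting a surface and a material description, the shader predicts the color and intensity of the light leaving it.

```python
# A toy "shader": Lambert diffuse shading (illustrative only).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert_shade(surface_color, normal, light_dir, light_color, light_intensity):
    """Return the color of light leaving a surface point, given the light hitting it."""
    # The more the surface faces away from the light, the less light it receives
    # (clamped so light arriving from behind contributes nothing).
    facing = max(0.0, dot(normal, light_dir))
    return tuple(sc * lc * light_intensity * facing
                 for sc, lc in zip(surface_color, light_color))

# Example: a red surface lit head-on by a white light.
print(lambert_shade(
    surface_color=(1.0, 0.2, 0.2),
    normal=(0.0, 0.0, 1.0),
    light_dir=(0.0, 0.0, 1.0),
    light_color=(1.0, 1.0, 1.0),
    light_intensity=0.9,
))
```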
Algorithms that take into account the ways in which light is transferred between the surfaces in a model are called global illumination algorithms.
An algorithm is a definite list of well-defined instructions for completing a task.
The two main algorithms for global illumination are ray-tracing and radiosity.
The ray-tracing algorithm exploits the fact that, although there are billions of photons in the air, the only photons we need are the ones that enter the eye. The algorithm works backwards, tracing rays from each pixel on the screen into the 3D model; in this way we compute only the information needed to construct the image.
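Here is a very rough Python sketch of that "work backwards from the eye" idea: one ray is fired through each pixel, and whatever that ray hits decides the pixel's value. The scene (a single sphere), the camera setup and the hit test are stand-ins invented for the example, not any particular renderer's code.

```python
# A bare-bones backwards ray tracer: one ray per pixel, from the eye into the scene.
# The sphere and "colors" (ASCII characters) are placeholders; a real ray tracer
# would also handle shading, shadows, reflections and many object types.
import math

WIDTH, HEIGHT = 8, 8
EYE = (0.0, 0.0, 0.0)
SPHERE_CENTER, SPHERE_RADIUS = (0.0, 0.0, -3.0), 1.0

def hits_sphere(origin, direction):
    # Standard ray/sphere intersection test (discriminant of the quadratic).
    oc = [o - c for o, c in zip(origin, SPHERE_CENTER)]
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - SPHERE_RADIUS ** 2
    return b * b - 4.0 * c >= 0.0

image = []
for y in range(HEIGHT):
    row = ""
    for x in range(WIDTH):
        # Build a ray from the eye through this pixel on an imaginary screen at z = -1.
        px = (x + 0.5) / WIDTH * 2.0 - 1.0
        py = 1.0 - (y + 0.5) / HEIGHT * 2.0
        length = math.sqrt(px * px + py * py + 1.0)
        direction = (px / length, py / length, -1.0 / length)
        # The pixel only needs to know what this one ray hits.
        row += "#" if hits_sphere(EYE, direction) else "."
    image.append(row)

print("\n".join(image))
```

Running it prints a small ASCII circle, because only the rays aimed at the sphere report a hit; that is the whole point of tracing backwards from the eye.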
Rendering Hardware
An API is any defined inter-program interface. The Video Acceleration API (VA API) is software which provides access to graphics hardware acceleration for video processing; accelerated processing includes rendering and video decoding as well as subpicture blending.
VA API was meant to someday replace XvMC, which is the Unix equivalent of Microsoft Windows DirectX Video Acceleration. The motivation for VA API is to enable accelerated video decode at various entry points for today's standards: MPEG-2, MPEG-4 ASP/H.263, MPEG-4 AVC/H.264 and VC-1/WMV3. Extending XvMC was considered, but because it was designed around MPEG-2 only, it made more sense to design a fresh interface that could fully expose the decoding abilities of today's GPUs.
A GPU (graphics processing unit) is a dedicated graphics rendering device for a PC, workstation or games console. Modern GPUs are extremely efficient at displaying computer graphics, and their highly parallel structure makes them more effective than general-purpose CPUs for a range of complex algorithms. A GPU can sit on a video card, or it can be integrated directly into the motherboard; the integrated GPUs found in more than 90% of computers are far less powerful than add-in cards.
In computer graphics, a shader is a set of software instructions used by graphics resources, mostly to perform rendering effects. Shaders allow a 3D application designer to program the GPU's "programmable pipeline", which has largely replaced the older "fixed-function pipeline", giving more flexibility and making the most of the GPU's abilities.
A render engine is a process that generates an image from a model. The model is a description of 3D objects held in a data structure, and the image is built from information such as geometry, viewpoint, texture, lighting and shading.
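To illustrate the kind of inputs a render engine works from, here is a hypothetical Python description of a very small scene. The field names are invented for this example rather than taken from any real engine, but they cover the same ingredients listed above: geometry, viewpoint, texture, lighting and shading.

```python
# A made-up, minimal scene description of the kind a render engine consumes.
# Field names are illustrative only; every real engine defines its own format.
from dataclasses import dataclass, field

@dataclass
class Camera:          # viewpoint
    position: tuple
    look_at: tuple
    field_of_view: float

@dataclass
class Light:           # lighting
    position: tuple
    color: tuple
    intensity: float

@dataclass
class Mesh:            # geometry plus its appearance
    vertices: list
    triangles: list
    texture_file: str  # texture applied to the surface
    shader: str        # shading model to use when rendering it

@dataclass
class Scene:
    camera: Camera
    lights: list = field(default_factory=list)
    meshes: list = field(default_factory=list)

scene = Scene(
    camera=Camera(position=(0, 1, 5), look_at=(0, 0, 0), field_of_view=60.0),
    lights=[Light(position=(2, 4, 2), color=(1, 1, 1), intensity=1.0)],
    meshes=[Mesh(vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                 triangles=[(0, 1, 2)],
                 texture_file="checker.png",
                 shader="lambert")],
)
print(scene.camera, len(scene.meshes), "mesh(es)")
```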
Silicon Graphics is considered to have played a key role in the development of hardware renderers because it was the originator of the OpenGL graphics API.
James Clark is a very well-known businessman who specialises in computer science. He has founded several notable companies, including Silicon Graphics, Inc., Netscape Communications Corporation, myCFO and Healtheon.
He is one of the most notable figures in computer graphics because his research led to the development of systems for the fast rendering of computer images.
OPEN GL
- OpenGL (and its shading language) is cross-platform.
- OpenGL is very portable, can be used in a variety of situations and is straightforward to use.
- OpenGL is unlikely to evolve at a fast rate.
DIRECT X
- DirectX supports a greater set of features.
- DirectX gives programmers a great deal of control over the rendering pipeline if they want it; DirectX 9.0 features programmable pixel and vertex shaders.
- DirectX has better support for modern chipset features.
- DirectX is also very straightforward to use; however, unlike OpenGL it is not cross-platform and in most cases only works on the Windows platform.
- It can also suffer from driver issues.
- DirectX is not portable and probably never will be.
Production Pipeline – Games
Concept
Every game we see begins as a simple concept or idea: a rough notion of what the game could be about. For instance, a concept could be to make a futuristic 3D shoot 'em up to suit today's appetite for violent games, or something as simple as an action/adventure game where you're controlling a pirate.
The game's idea can also start as simply wanting to make a follow-up or sequel to an existing title, a game based on an existing non-gaming character, story or franchise (for example a Star Wars game, although George Lucas's permission would be required; see my post on George Lucas), or a game that's meant to simulate some real-world experience, as is the case with sports, flight or driving simulations. In these cases, the beginning of the game's development can simply be the company deciding that it wants to make a game that simulates the real-life sport of motor racing, or one based on the television series Lost.
Pre-Production
The next step in the game development process is usually referred to as the pre-production phase. This is where a pre-production team, which usually includes a varying number of producers, designers, programmers, artists and writers, works on things such as writing the storyline, creating storyboards, and putting together a design document detailing the game's goals, level designs, gameplay mechanics and overall blueprint.
The freedom the pre-production team has in each of these areas depends on the type of game being made. When a game is being created from a completely original idea, the story writers, artists and designers can make whatever they imagine, with no limit except the hardware; the story and characters are bounded only by the imaginations of the people on the pre-production team.
In instances where the game being developed is based on a licensed franchise or is a simulation of a real-world event, that freedom is often limited to what's allowed within the franchise or event in question. If a company is working on a game based on a Pixar license, there'll often be restrictions on what the characters can do or say or where the storyline can go. There'll also usually be guidelines that stipulate precisely what the characters in the game must look like.
Likewise, if a simulation of football is being developed the designers have to mimic the real-life rules and regulations of the sport. While new characters, teams, and rules may be added, if it’s an F.A-licensed football simulation being developed it will have to have a foundation based on the real-life players, teams, rules, regulations and look of the F.A.
Next comes the storyline (if the game requires one). The storyline is a hugely important part of the process, as it defines the main characters, plot, setting and overall theme. It can be as simple as coming up with the names of characters entering a racing tournament, or much bigger, with hundreds of words of spoken dialogue as in the Grand Theft Auto games. Of course, if what's being worked on is a simple simulation of backgammon and no characters or plot are planned, this step is skipped.
Once the storyline is completed, the next step is to piece together a storyboard for the game. This is a visual representation of the storyline that includes sketches, concept art and text to explain what happens in each scene of the game. Storyboards are mostly done for the cinematic CG-rendered or real-time cut-scenes, but they may also be used for general gameplay.
The third aspect of the pre-production phase, done alongside the writing of the story and the crafting of the storyboards, is the piecing together of a design document for the game. In addition to including the storyline and storyboards, the design document shows the designers' overall blueprint for exactly how the game will be played: what each menu or screen will look like, what the controls for the character or characters are, what the game's goal is, the rules for winning and losing, and maps of the different worlds or levels within the game.
This is where the designers, as well as the software engineers, must decide things such as what happens on screen when a specific button or key is pressed. Most of the time, though, is spent on things such as what exactly is in each world, what can and cannot be interacted with, and how an NPC (non-player character) reacts to what the player-controlled character does in the game.
The parties involved must also take into account the limitations of the platform the game is being created for and, in the case of consoles, whatever standards the hardware manufacturer requires to be met before the game is approved for release on the system.
Production
After pre-production is complete and the game's overall blueprint has been finalized, development enters the production phase and a larger group of producers, designers, artists and programmers is brought into the mix.
The producer(s) work with the design, art and programming teams to make sure everyone is working together. Their main job is to create schedules for the engineers and artists, make sure those schedules are stuck to, and ensure that the goals of the design are followed throughout development. Those in production also deal with any licenses the game uses and make sure the company's marketing department knows what it needs to know about the title.
The artists during the production phase will be working on building all of the animations and art which you’ll see in the game. Programs such as Maya and 3D Studio Max will often be used to model all of the game’s environments, objects, characters and menus – essentially everything. The art team will take care of creating all of the texture maps that are added to the 3D objects to give them more life.
At this time, the programmers are working on coding the game's library, engine and artificial intelligence (AI). The library is usually something the company has already created for use across all its games, and it is constantly updated and tweaked to meet any new goals or expectations for newer titles. Many times the library team will also be required to write its own custom programming code.
Post-Production
The final stage of a game's development is the post-production stage. This begins when the game is "feature complete": all of the code has been written and the art has been completed, but problems may remain. An alpha version of the game is created and supplied to the test department, who find the bugs and major flaws that then need to be fixed by the artists or programmers.
Once all of the bugs and major flaws are identified and addressed, a beta version of the game is produced and once again sent to the test department. This is where the hardcore testing is done: every single bug, however major or minor, is documented and, where possible, fixed.
Once the game is approved by the console manufacturer (or simply finished by the developer, in the case of PC games), all that's left is for it to be manufactured and distributed to stores, where you can go out and buy it.
3D Pipeline - Movies
Production Pipeline
Storyboard
Storyboards are like a hand-drawn version of the movie and serve as a blueprint for movement and dialogue. Each storyboard artist is given a script page and a map of the characters' emotional changes, which need to be shown through actions; using these as guidelines, the artist then draws out the scene.
After the storyboards are done, they are digitally scanned and joined to create a story reel; this works rather like a flip book, because it lets you see how all the drawings flow.
According to DreamWorks, this process can take up to 18 months: http://www.dreamworksanimation.com/dwa/opencms/inside/how_we_make_movies/development/3.html
Visual development
After the story reel is complete, artists get together to design everything that will be in the film, from the major characters to the smallest props.
Thousands of blueprints, models, paintings and drawings will be created and eventually, a digital world and characters will have been created.
Voice-overs
Now that the character designs are chosen, it's time to record the voice-overs. The voices are recorded first so that, later on, the modellers and animators can match the characters' mouths to the speech, which is much easier.
Modelling
From the initial designs, modellers will construct a digital 3-D model that will be used for planning and animation.
Rigging
The modellers start with a wireframe sculpture called an armature, which breaks the design down into workable geometry and allows them to "rig" the figure, giving the animator the ability to move the 3-D figure in whatever way is necessary to get the articulation they want.
Basic Surfaces
Once we’ve set up the armature, we can begin to add basic surfaces. It is this simplified “puppet in a box” or digital marionette that is used in the next step.
Layout
Layout artists use rough “stand-in” shapes to block out the movement of the character in the scene. This rough layout or animatic is the blueprint from which we determine camera movement, character placement, spacing, lighting, geography and scene timing. The animatic maps out the entire movie, giving us a digital picture of each scene before we actually begin the character animation.
Character animation
Once the sequence is working well in layout, the animators start bringing the characters to life in the computer. They articulate the thousands of controls that were created during the character-rigging phase to bring each character to life and to synchronize them to the voice performances. Now the characters really look like themselves, but not quite. Remember, this is just the animation; the scene isn’t quite finished yet.
Effects
After the camera moves have been set and the characters have been animated, the next steps are effects and lighting. In a live-action film, it's easy to photograph things like leaves blowing in the wind, waves at the beach or even footprints in the sand. In computer animation, these simple things all have to be designed and animated by the effects artists. In other words, if it's not acting but it moves, it's an effect. This whole process can take up to four years.