Tuesday 7 December 2010

Radiosity and Ray-tracing

Radiosity is a rendering technology that realistically simulates the way light interacts with an environment. By simulating the light in a scene more precisely, radiosity offers benefits over standard lighting.
Radiosity produces more accurate photometric simulations of the lighting in a scene. Effects such as indirect light, soft shadows and colour bleeding between surfaces produce natural-looking images that are not possible with standard rendering. These images give you a better, more realistic idea of how your designs will look under particular lighting conditions.
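To make the idea concrete, here is a tiny sketch of my own (a toy example, not real renderer code) of the classic radiosity "gathering" iteration: each patch's radiosity is its own emission plus the light it gathers from every other patch. The reflectance and form-factor numbers are made up for illustration.

```python
# A minimal radiosity "gathering" sketch: each patch's radiosity B_i is its
# emission E_i plus its reflectance rho_i times the light gathered from every
# other patch, weighted by form factors F[i][j].

def solve_radiosity(E, rho, F, iterations=50):
    """Iteratively solve B_i = E_i + rho_i * sum_j F[i][j] * B_j."""
    n = len(E)
    B = list(E)  # start with the emitted light only
    for _ in range(iterations):
        B = [E[i] + rho[i] * sum(F[i][j] * B[j] for j in range(n))
             for i in range(n)]
    return B

# Two patches facing each other: patch 0 is a light source, patch 1 only reflects.
E   = [1.0, 0.0]          # emission
rho = [0.5, 0.8]          # diffuse reflectance
F   = [[0.0, 0.2],        # form factors: the fraction of light leaving patch i
       [0.2, 0.0]]        # that arrives at patch j (zero on the diagonal)

B = solve_radiosity(E, rho, F)
# Patch 1 emits nothing, yet ends up lit: that is indirect light.
```

This is exactly where effects like colour bleeding come from: even a patch with zero emission ends up carrying light bounced from its neighbours.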
Radiosity Comparison
Photometry is the science of measuring light in terms of its perceived brightness to the human eye; it is measured in units such as lumens, candelas and lux.
Local illumination algorithms describe only how individual surfaces reflect or transmit light. Given a description of the light hitting a surface, these mathematical algorithms (in 3ds Max they are called shaders) predict the colour, intensity and distribution of the light leaving that surface. In conjunction with a material description, different shaders determine properties such as roughness and whether the object looks like metal or plastic. After finding out how the surface reacts to the light, the next task is to figure out where the light hitting the object originates.
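As a rough illustration of what a local shader computes (this is my own toy Lambert diffuse model, not 3ds Max's actual shader code), note that it only needs the light direction and the surface normal; it knows nothing about the rest of the scene:

```python
import math

def dot(a, b):
    """Dot product of two 3D vectors stored as tuples."""
    return sum(x * y for x, y in zip(a, b))

def lambert(normal, light_dir, light_intensity, diffuse_reflectance):
    """Diffuse intensity = reflectance * intensity * max(0, N . L)."""
    n_dot_l = max(0.0, dot(normal, light_dir))
    return diffuse_reflectance * light_intensity * n_dot_l

# Light shining straight down onto an upward-facing surface:
i = lambert(normal=(0, 1, 0), light_dir=(0, 1, 0),
            light_intensity=1.0, diffuse_reflectance=0.8)
# i == 0.8; a light grazing the surface from the side gives zero.
```

A metal or plastic look would come from swapping in a different function here (a different shader); the local-illumination framework stays the same.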
Algorithms that take into account the ways in which light is transferred between surfaces in a model are called global illumination algorithms.

An algorithm is a mathematical procedure: a definite list of well-defined instructions for completing a task.
The two main algorithms for global illumination are ray-tracing and radiosity.
The ray-tracing algorithm exploits the fact that, although there are billions of photons in a scene, the only ones we need are those that enter the eye. The algorithm works backwards, tracing rays from each pixel on the screen into the 3D model; in this way we gather only the information needed to construct the image.
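Here is a toy sketch of that backwards tracing (the scene, names and numbers are all mine, invented for illustration): one ray per pixel is fired from the eye and tested against a single sphere, producing a tiny ASCII "render".

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Return True if the ray origin + t*direction hits the sphere
    (direction is assumed to be unit length, so a = 1 in the quadratic)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    return b * b - 4 * c >= 0  # non-negative discriminant means a hit

def render(width, height):
    eye = (0.0, 0.0, 0.0)
    sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Map the pixel onto a viewing plane at z = -1, then normalise.
            px = (x + 0.5) / width * 2 - 1
            py = 1 - (y + 0.5) / height * 2
            length = math.sqrt(px * px + py * py + 1)
            ray = (px / length, py / length, -1 / length)
            row.append('#' if hit_sphere(eye, ray, sphere_center,
                                         sphere_radius) else '.')
        image.append(''.join(row))
    return image

image = render(11, 11)  # centre pixels hit the sphere, corner pixels miss
```

A real ray-tracer then shades each hit point and recursively spawns reflection and refraction rays, but the backwards eye-to-scene structure is exactly this loop.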

Rendering Hardware

An API is any defined inter-program interface. The Video Acceleration API (VA API) is software that provides access to graphics hardware acceleration for video processing. Accelerated processing includes rendering and video decoding as well as subpicture blending.
VA API was meant to someday replace XvMC, the Unix equivalent of Microsoft Windows DirectX Video Acceleration. The motivation for VA API was to enable accelerated video decoding at various entry points for today's standards: MPEG-2, MPEG-4 ASP/H.263, MPEG-4 AVC/H.264 and VC-1/WMV3. Extending XvMC was considered, but because it was designed for MPEG-2 only, it made more sense to design a fresh interface that could fully expose the decoding abilities of today's GPUs.
A GPU (graphics processing unit) is a dedicated graphics rendering device for a PC, workstation or games console. Modern GPUs are extremely efficient at displaying computer graphics, and their highly parallel structure makes them more effective than general-purpose CPUs for a range of complex algorithms. A GPU can sit on a video card, or it can be integrated directly into the motherboard. Integrated GPUs, found in more than 90% of computers, are far less powerful than the add-in cards.
GPU under heatsink
Picture courtesy of Wikipedia
A shader, in the field of computer graphics, is a set of software instructions used mostly to perform rendering effects. Shaders allow a 3D application designer to program the GPU's "programmable pipeline", which is now used far more than the older "fixed-function pipeline", allowing more flexibility and making the most of the GPU's abilities.
A render engine is a process that generates an image from a model. The model is a description of 3D objects held in a data structure; the image is built from information such as geometry, viewpoint, texture, lighting and shading.
Silicon Graphics is considered to have played such a key role in the development of hardware renderers because it created OpenGL.
James Clark is a very well-known businessman who specialises in computer science. He has founded several notable companies, including Silicon Graphics, Inc., Netscape Communications Corporation, myCFO and Healtheon.
He is one of the most notable founders in computer graphics because his research into the field led to the development of systems for the fast rendering of computer images.
OpenGL
  • OpenGL is cross-platform.
  • OpenGL is very portable, can be used in a variety of situations, and is straightforward to use.
  • OpenGL is unlikely to evolve at a fast rate.
DirectX
  • DirectX supports a greater set of features.
  • DirectX gives programmers a great deal of control over the rendering pipeline if they want it; DirectX 9.0 features programmable pixel and vertex shaders.
  • DirectX has better support for modern chipset features.
  • DirectX is also very straightforward; however, unlike OpenGL it is not cross-platform and in most cases only works on the Windows platform.
  • However, it has some driver issues.
  • DirectX is not portable and probably never will be.

Production Pipeline – Games

Concept
The beginning of every game we see is a simple concept or idea: just a notion of what the game could be about. For instance, a concept could be a futuristic 3D shoot 'em up to suit today's appetite for violent games, or something as simple as an action/adventure game where you control a pirate.
The game's idea can also start as simply wanting to make a follow-up or sequel to an existing title; a game based on an existing non-gaming character, story or franchise (for example a Star Wars game, although George Lucas's permission would be required: see my post on George Lucas); or a game meant to simulate some real-world experience, as with sports, flight or driving simulations. In these cases, the beginning of the game's development can simply be the company deciding that it wants to make a game that simulates the real-life sport of motor racing, or one based on the television series Lost.
Pre-Production
The next step in the game development process is usually referred to as the pre-production phase. This is where a pre-production team, usually including a number of producers, designers, programmers, artists and writers, works on tasks such as writing the storyline, creating storyboards, and putting together a design document detailing the game's goals, level designs, gameplay mechanics and overall blueprint.
The freedom the pre-production team has in each of these areas depends on the type of game being made. When a game is being created from a completely original idea, the story, characters and designs are limited only by the imaginations of the people on the pre-production team, and by the hardware's limitations.
In instances where the game being developed is based on a licensed franchise or a simulation of a real-world event, the freedom is often limited to what's allowed within the franchise or real-world event in question. If a company is working on a game based on a Pixar licence, there'll often be restrictions on what the characters can do or say or where the storyline can go. There'll also usually be guidelines that stipulate precisely what the characters in the game must look like.
Likewise, if a simulation of football is being developed, the designers have to mimic the real-life rules and regulations of the sport. While new characters, teams and rules may be added, if it's an FA-licensed football simulation being developed, it will have to have a foundation based on the real-life players, teams, rules, regulations and look of the FA.
Next comes the storyline (if the game requires one). The storyline is a hugely important step, as it defines the main characters, plot, setting and overall theme. It can be as simple as coming up with the names of characters entering a racing tournament, or much bigger, with hundreds of words of spoken dialogue, as in the Grand Theft Auto games. Of course, if what's being worked on is a simple simulation of backgammon and no characters or plot are planned, this step is skipped.
Once the storyline is completed, the next step is to piece together a storyboard for the game. This is a visual representation of the storyline that includes sketches, concept art and text to explain what happens in each scene. Storyboards are mostly done for the cinematic CG-rendered or real-time cut-scenes, but may also be used for general gameplay.
The third aspect of the pre-production phase, done alongside the writing of the story and the crafting of the storyboards, is the piecing together of a design document for the game. In addition to the storyline and storyboards, the design document shows the designers' overall blueprint for exactly how the game will be played: what each menu or screen will look like, what the controls for the characters are, what the game's goal is, the rules for how you win or lose, and maps of the different worlds or levels within the game.
This is where the designers, as well as the software engineers, must decide things such as what happens on screen when a specific button or key is pressed. Most time, though, is spent on things such as what exactly is in each world, what can and cannot be interacted with, and how an NPC (non-player character) reacts to what the player-controlled character does in the game.
The parties involved must also take into consideration the limitations of the platform the game is being created for and, in the case of consoles, the standards the hardware manufacturer requires to be met for the game to be approved for release on the system.
Production
After pre-production is complete and the game's overall blueprint has been finalized, development enters the production phase, and a larger group of producers, designers, artists and programmers is brought into the mix.
The producer(s) work with the design, art and programming teams to make sure everyone is working together. Their main job is to create schedules for the engineers and artists, make sure those schedules are stuck to, and ensure that the goals of the design are followed throughout development. Those in production also deal with any licences the game uses and make sure the company's marketing department knows what it needs to know about the title.
During the production phase, the artists build all of the animation and art you'll see in the game. Programs such as Maya and 3D Studio Max are often used to model all of the game's environments, objects, characters and menus: essentially everything. The art team also creates all of the texture maps that are added to the 3D objects to give them more life.
At this time, the programmers are working on coding the game's library, engine and artificial intelligence (AI). The library is usually something the company has already created for use with all of its games, and it is constantly updated and tweaked to meet new goals or expectations for newer titles. Many times the library team will also be required to write its own custom programming code.
Post-Production
The final stage of a game's development is the post-production stage. This begins when the game is "feature complete": all of the code has been written and the art has been completed, but there may still be problems. An alpha version of the game is created and supplied to the game's test department to find bugs and major flaws that need to be fixed, whether by the artists or the programmers.
Once all of the bugs and major flaws are identified and addressed, a beta version of the game is produced and once again sent to the test department. This is where the hardcore testing is done: every single bug, however major or minor, is documented and, where possible, fixed.
All that's left, once the game is approved by the console manufacturer (or simply finished by the developer, in the case of PC games), is for the game to be manufactured and distributed to stores where you can go out and buy it.

3D Pipeline - Movies

Storyboard
Storyboards are like a hand-drawn version of the movie; they serve as a blueprint for movement and dialogue. Each storyboard artist is given a script page and a map of the characters' emotional changes, which need to be conveyed through actions. Using these as guidelines, the artist then draws out the scene.

After the storyboards are done, they are digitally scanned and joined to create a story reel; this is like a flip book, in that it lets you see how all the drawings flow.
According to DreamWorks, this process can take up to 18 months: http://www.dreamworksanimation.com/dwa/opencms/inside/how_we_make_movies/development/3.html
Visual development
After the story reel is complete, artists get together to design everything that will be in the film, from the major characters to the smallest props.
Thousands of blueprints, models, paintings and drawings are created, and eventually a digital world and cast of characters take shape.
Voice-overs
Now that the character designs have been chosen, it's time to record the voice-overs. These come first so that the modellers can later make the characters' mouths match the speech, which is much easier.
Modelling

From the initial designs, modellers will construct a digital 3-D model that will be used for planning and animation.

Rigging

The modellers start with a wire-frame sculpture called an armature, which breaks the design down into workable geometry and allows them to "rig" the figure. Rigging gives the animator the ability to move the 3D figure in whatever way is necessary to get the articulation they want.
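As a hand-rolled sketch of what a rig ultimately gives the animator (every name here is my own invention, not any real package's API): each joint stores an angle, and forward kinematics walks down the chain of bones to work out where each one ends up, so rotating a "shoulder" control moves everything below it.

```python
import math

def forward_kinematics(bone_lengths, joint_angles):
    """Return the 2D position of each joint in a chain of bones.
    Angles are in radians and accumulate down the chain."""
    x, y, angle = 0.0, 0.0, 0.0
    positions = [(x, y)]
    for length, joint_angle in zip(bone_lengths, joint_angles):
        angle += joint_angle          # each joint rotates relative to its parent
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        positions.append((x, y))
    return positions

# Two bones of length 1: upper arm straight out, then the elbow bent 90 degrees.
joints = forward_kinematics([1.0, 1.0], [0.0, math.pi / 2])
# joints[-1] is approximately (1, 1): the "hand" has lifted.
```

A production rig layers thousands of such controls (and smarter solvers, like inverse kinematics) over the armature, but this parent-to-child accumulation is the core idea.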

Basic Surfaces

Once we’ve set up the armature, we can begin to add basic surfaces. It is this simplified “puppet in a box” or digital marionette that is used in the next step.

Layout

Layout artists use rough “stand-in” shapes to block out the movement of the character in the scene. This rough layout or animatic is the blueprint from which we determine camera movement, character placement, spacing, lighting, geography and scene timing. The animatic maps out the entire movie, giving us a digital picture of each scene before we actually begin the character animation.

Character animation
Once the sequence is working well in layout, the animators start bringing the characters to life in the computer. They articulate the thousands of controls that were created during the character-rigging phase to bring each character to life and to synchronize them to the voice performances. Now the characters really look like themselves, but not quite. Remember, this is just the animation; the scene isn’t quite finished yet.

Effects

After the camera moves have been set and the characters have been animated, the next steps are effects and lighting. In a live-action film it's easy to photograph things like leaves blowing in the wind, waves at the beach or even footprints in the sand; in computer animation these simple things must all be designed and animated by the effects artists. In other words, if it's not acting but it moves, it's an effect. This process can take up to four years.

George Lucas 3D Pioneer

George Walton Lucas, Jr. (born May 14, 1944) is a four-time Academy Award-nominated American film director, producer and screenwriter, famous for his epic Star Wars saga and the Indiana Jones films, although the Indiana Jones films are a collaboration with his friend Steven Spielberg. He is one of the American film industry's most financially successful independent directors and producers.
Lucas co-founded the studio American Zoetrope with Francis Ford Coppola, hoping to create a liberating environment for filmmakers. With the financial success of his films American Graffiti (1973) and Star Wars (1977), Lucas was able to set up his own studio, LucasFilm. Skywalker Sound and Industrial Light & Magic, the sound and visual-effects subdivisions of LucasFilm respectively, have become among the most respected firms in their fields. LucasFilm Games, later renamed LucasArts, is highly regarded in the gaming industry.
George Lucas probably wasn't the first to use 3D models with a chroma screen, but he did use the technique in his highly successful Star Wars films. He had highly detailed models filmed against a blue/green background; he would then take this footage, key out the blue/green and composite it over a separate background. He also made fully 3D environments, which can be seen in the footage below:
Here you can see the use of the chroma screen in the cockpit and the use of 3D models during the flight. He would also have real models built for some shots.
Lucas's company LucasFilm has been a leader in developing new film technology in special effects, sound and computer animation, and because of this expertise its subsidiaries often help produce non-LucasFilm pictures. LucasFilm is set to move away from films and more into TV, due to rising budgets.
The following is a list of Current LucasFilm Subsidiaries.
  • Lucas Digital
    • Skywalker Sound – post production sound editing
    • Industrial Light & Magic – special effects
  • Lucas Licensing - licensing and merchandising
    • Lucas Learning – educational materials
    • Lucas Books – book publishing
  • LucasArts – video and computer games
  • LucasFilm Animation - animation
  • Lucas Online - websites
Of the four categories defined by John W. Gardner, which can be found here, I think George Lucas fits into the category of "Adaptors". I think this because he didn't invent the use of 3D models on a chroma screen, but he did take the idea, refine it and make it more realistic.

Cartesian Co-ordinate System

To create the illusion of working in 3D space, software packages use the Cartesian Co-ordinate System.
This system was developed in 1637 by the French philosopher and mathematician René Descartes, originally in an effort to merge algebra and Euclidean geometry. His work played an important role in the development of analytic geometry, calculus and cartography. The two axes that commonly define the 2-dimensional Cartesian system are the X and Y axes. The point where the X and Y axes meet is called the origin.
Early in the 19th century, a third dimension of measurement was added; this axis is called the depth axis, or Z axis. It runs at right angles to the XY plane and also extends forever in both directions. This third axis is important for 3D work because it enables us to locate any point in three-dimensional space.
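Once every point has an (x, y, z) triple, the Pythagorean theorem extends naturally to give distances in 3D, which is what modelling software leans on constantly. A tiny sketch of my own (the points are just example values):

```python
import math

def distance_3d(p, q):
    """Straight-line distance between two points in Cartesian 3D space:
    sqrt((x1-x2)^2 + (y1-y2)^2 + (z1-z2)^2)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

origin = (0, 0, 0)
point = (3, 4, 12)               # 3-4-12 makes a tidy right-triangle example
d = distance_3d(origin, point)   # sqrt(9 + 16 + 144) = sqrt(169) = 13
```

Dropping the z component from both points gives back the familiar 2D distance formula, which is exactly the relationship between the 2D system and its 3D extension.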
3ds Max has viewports; by default, three of these viewports show only two axes at any given time. These views are called orthographic views, and they are important because they let us see our models in 2-dimensional views.

3D Development Software

3D development software is software used to develop 3D imagery. There are several 3D software packages available; some are listed below:
3D Studio Max
Animation Master
Blender
Cinema 4D
LightWave
Maya
MilkShape 3D
Modo
Softimage|XSI
Silo
trueSpace
ZBrush
Pro/ENGINEER
SolidWorks
Catia
Some free modelers available via the Internet include:
Anim8or
Art of Illusion
AutoQ3D
Blender
Quake 2 Modeler
Google SketchUp
TopMod
Wings 3D
Zanoza Modeler
I will focus on 3ds Max, Maya and AutoCAD.
3ds Max is 3D modelling software developed by Autodesk; it is currently in its tenth version, 3ds Max 2008.
It is mostly used by video-game developers, TV commercial studios and architectural studios. It is also used for movie effects.
Above: Interface of 3Ds Max 2008, rendered model of living room, rendered model of car and rendered model of a gun
Maya is 3D modelling software originally developed by Alias Systems Corporation but now owned by Autodesk. Maya is mainly used in the film and TV industry; work from Maya's lineage appears in several films, including Jurassic Park, The Abyss and Terminator 2: Judgment Day. The three different tools behind Maya are Alias Studio for modelling, Softimage for animation, and PhotoRealistic RenderMan for rendering.
This T-Rex was made using the three tools that make up Maya, for the film Jurassic Park.
This water creature was made using the three tools that make up Maya, for the film The Abyss.
The shape-shifting T-1000 from the film Terminator 2: Judgment Day was made using the three tools that make up Maya.
Maya and 3ds Max have some overlap, as both are used in games design, but Maya is best suited to the film industry.
AutoCAD is 2D and 3D modelling software. It is mainly used for architectural modelling, and for modelling things such as public spaces and plumbing pipelines.
AutoCAD is generally not used for making 3D models; it is aimed at a different market to that of Maya and 3ds Max.

History of 3D graphics

Computer graphics formally began in 1963 with the work of Ivan Sutherland.
In his classic thesis, he showed that a computer could be used for interactive design of line drawings on a simple CRT (cathode-ray tube) display with a few auxiliary input controls. Other people had already connected CRTs to computers in the 1950s to generate very simple output displays, but it was not until Sutherland developed his system for man-machine interactive picture generation that people became aware of computer graphics' full potential.

The realization of that potential, however, was slow to develop. There were three major barriers. The first was the then-high cost of computing. It was quickly discovered that computer graphics, especially interactive graphics, would be beyond most computers' capacity in both processing power and memory size. During the sixties, the cost of meeting these demands could only be justified for research purposes in a few universities and large industrial research labs.

The second barrier was a lack of understanding of the intricacies of the picture-generating software that would be needed for an effective computer graphics system. Someone had to develop a data structure that would mimic the often barely realised, but visually obvious, relationships inherent in a 2D picture.
Algorithms for shading, scan conversion and hidden-line removal were needed, and they were more complex than first imagined. Even a task as simple as drawing a line on a digitally oriented display turned out to require complex algorithms.

Finally, the third barrier was that the complexity of both system software and application software was grossly underestimated; many of the early graphics achievements were in fact mere "toys": impressive but inadequate.

Fortunately, time favoured computer graphics. The cost of computing fell year after year, operating systems improved, and so did our ability to cope with complex software. Impressive progress was made in the development of algorithms for generating pictures.

Timeline
1960s
  • Initial experimentation with 3D graphics.
  • Key Figures Charles Csuri and John Whitney Sr.
1970s
  • Many animation and rendering algorithms used today were developed in the 1970s
  • Image and bump textures developed
  • Hand and face improvements
  • First use of 3D CGI in movies
1980s
  • Ray-tracing rendering developed (ray tracing is a CG algorithm used to calculate reflections and refractions of light)
  • Video became most common output method for animation
  • ‘TRON’ was the first movie with more than 20 mins of digital graphics (1982)
  • The mid to late 80’s saw a huge growth in the use of digital graphics in movies and advertising.
  • Luxo Jr. (Pixar, 1986) was the first computer-generated animation to be nominated for an Academy Award
  • Tin Toy (Pixar, 1988) was the first to win an Academy Award
  • Particle-system animation developed
1990s and 2000s
  • Continued growth in the use of computer-generated graphics in movies, advertising and scientific visualisations
  • Notable success of 3D CG imagery in the Harry Potter and The Lord of the Rings feature films
  • Rapid advances in modelling, animating and rendering
  • Feature-length computer-generated movies: Toy Story, A Bug's Life

Lego batman review


Hero or villain? Why not both?
Batman dishes out justice to those he deems evil, though he's not morally perfect himself, so you wouldn't think he'd be the best choice for a switch to the Lego universe. Surprisingly, though, Lego Batman holds up quite well in a family-friendly way.

The best thing about Lego Batman is that the script is not tied to any comic-book or movie storyline. The developer, Traveller's Tales, created its own concept for this game, in which all the villains have broken out of Arkham Asylum at the same time and are now roaming the streets of Gotham City, concocting acts of super-villainy to unleash against its people.

It's a ridiculously simple storyline, but that's good, because it leads to Lego Batman's best mechanic: there's a huge variety of characters available to use, and the game stretches beyond household names like the Penguin and the Joker to let you experience fighting and playing as lesser-known villains such as Hush, Killer Croc and the Mad Hatter. What sets this title apart from the others in the Lego series is that it features two interlocked storylines.

While you can foil crime with Batman and Robin, you can then go back and relive each episode from the villains' perspective, letting you control a huge array of baddies, each with their own unique powers. From the Penguin's penguin bomb to Scarecrow's scare gas, each bad guy is as fun to command as Batman and Robin, and you'll likely find yourself loving the villains more than the heroes.

It's not that Batman and Robin are lazy; they just have to solve problems in a different way. While the baddies may be able to cross an acid lake simply by walking over it, the heroes have no superpowers of their own, so they must rely on suits to get the job done. Batman might blow open a new path with his demolition suit, or Robin could use his magnetic suit to climb a metal wall and flip a previously unreachable switch. Both have a variety of suits to choose from, and you're always given the right set of tools to accomplish any task. This is an all-round family game; however, it does have one main problem, which is the camera.