First Render: Xbox360
Ok, here's the Tiger example scene on the 360:
Visually it's not very impressive, just like the last screenshot. The point is that the basic Nebula3 render loop is now also running on the 360 (complete with asynchronous resource loading), side by side with the PC version, driven by exactly the same high-level code and built from the same source assets.
It took me a bit longer than planned because of the 360's tiled rendering architecture, and because the rendering code is *not* just a straight port from the PC version (you pretty much could do a straight port on the 360, it's just not as optimal as using the more 360-specific APIs).
Because of the tiled rendering I had to make a few basic changes to the CoreGraphics subsystem in order to hide the details from the higher-level code:
CoreGraphics is now hinted by the higher-level rendering code about what is actually being rendered, for instance depth-only geometry, solid or transparent geometry, occlusion checks, fullscreen post-effects and so on.
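As a rough illustration, such a hint could be as simple as an enum that the higher-level code passes down; the names below are my own assumptions, not the actual Nebula3 API:

```cpp
// Hypothetical rendering hint passed from the higher-level rendering code
// down into CoreGraphics. The platform-specific backend can use it to decide
// what to do before/after rendering (e.g. set up predicated tiling on the 360).
enum class BatchType
{
    DepthOnly,          // depth-only geometry (z-prepass)
    Solid,              // opaque geometry
    Transparent,        // alpha-blended geometry
    OcclusionQuery,     // occlusion-check geometry
    FullscreenEffect    // fullscreen post-effects
};
```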
The second change is that RenderTargets have become much smarter. A RenderTarget object is no longer just a single rendering surface as in Nebula2; instead it may contain up to 4 color buffers (with multisampling, if supported in the specific configuration) and an optional depth/stencil buffer. It has Begin()/End() methods which mark the beginning and end of rendering to the render target (this is where the rendering hints come in, letting the RenderTarget classes perform platform-specific actions before and after rendering).
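To sketch what that could look like in code (the class layout and method names here are my own assumptions, not the actual Nebula3 interface):

```cpp
// Hypothetical RenderTarget interface (illustrative only).
class Texture;  // resolved, non-multisampled texture

enum class PixelFormat { X8R8G8B8, A16B16G16R16F, D24S8 };
enum class MsaaType    { None, Msaa2x, Msaa4x };

class RenderTarget
{
public:
    // configuration: up to 4 color buffers plus an optional depth/stencil buffer
    void AddColorBuffer(PixelFormat fmt, MsaaType msaa);
    void AddDepthStencilBuffer(PixelFormat fmt);

    // Begin()/End() bracket all rendering into this target; the BatchType hint
    // (see the enum sketched above) lets the platform-specific subclass perform
    // its setup/teardown work before and after rendering
    void Begin(BatchType hint);
    void End();

    // resolve the (possibly multisampled) content into a plain texture,
    // or hand the result over for presentation
    Texture* ResolveToTexture();
    void Present();
};
```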
A render target now also knows how to resolve its content most efficiently into a texture, or how to make it available for presentation. Traditionally, a render target and a texture were the same thing in Direct3D: you could render to a render target, and when rendering was finished, the render target could immediately be used as a texture. This no longer works with multisampled render targets; those have to be resolved into a non-multisampled texture using the IDirect3DDevice9::StretchRect() method. On the 360 everything is again a little different depending on the rendering scenario (720p vs. 1080p vs. MSAA types). So the best solution was to hide all those platform specifics inside the RenderTarget object itself. A Nebula3 application doesn't have to be aware of any of these details; it just sets the current render target, does some rendering, and either gets a texture from the render target for subsequent rendering, or presents the result directly.
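Under the same assumptions, the application-side flow could look roughly like this (an illustrative sketch, not actual Nebula3 code):

```cpp
// Illustrative frame: render the scene into an offscreen target, then use the
// resolved texture in a fullscreen post-effect on the back buffer.
void RenderFrame(RenderTarget* offscreenTarget, RenderTarget* backBuffer)
{
    // render opaque geometry into the offscreen target; on the 360 the hint
    // lets the backend set up tiling, on D3D9 it may simply be ignored
    offscreenTarget->Begin(BatchType::Solid);
    // ... draw the scene ...
    offscreenTarget->End();

    // the render target decides internally how to resolve itself: StretchRect()
    // for multisampled D3D9 targets, the appropriate resolve path on the 360,
    // or nothing at all if the surface is directly usable as a texture
    Texture* sceneTexture = offscreenTarget->ResolveToTexture();

    // feed the resolved texture into a fullscreen post-effect
    backBuffer->Begin(BatchType::FullscreenEffect);
    // ... draw a fullscreen quad sampling sceneTexture ...
    backBuffer->End();
    backBuffer->Present();

    (void)sceneTexture;  // only referenced from the omitted draw code above
}
```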
I also started to work on a "FrameShader" system. This is the next (and simplified) version of Nebula2's RenderPath system. It's basically a simple XML schema which describes exactly how a frame is rendered. It has two main purposes:
- grouping render batches into frame passes (e.g. depth pass, solid pass, alpha pass) and thus eliminating redundant per-batch state switches (state which is constant across all objects in a pass is set only once)
- easy configuration of offscreen rendering and post effects without having to recompile the application
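Since the schema itself isn't shown here, the following is only a hypothetical example of what such a frame shader description might look like; all element and attribute names are made up for illustration:

```xml
<!-- Hypothetical frame shader description; element and attribute names are
     invented for illustration, the actual Nebula3 schema may differ. -->
<FrameShader name="Default">
    <!-- offscreen target the scene is rendered into before post-processing -->
    <RenderTarget name="SceneBuffer" format="A8R8G8B8" msaa="4x" depthStencil="true"/>

    <!-- frame passes group render batches and set shared state only once -->
    <Pass name="Depth" renderTarget="SceneBuffer" clearDepth="1.0" batchType="DepthOnly"/>
    <Pass name="Solid" renderTarget="SceneBuffer" batchType="Solid"/>
    <Pass name="Alpha" renderTarget="SceneBuffer" batchType="Transparent"/>

    <!-- fullscreen post-effect reading the resolved SceneBuffer texture -->
    <PostEffect name="Compose" renderTarget="BackBuffer" shader="compose"
                sourceTexture="SceneBuffer" batchType="FullscreenEffect"/>
</FrameShader>
```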