> So then I suppose I could ask, when did frame buffers come back into fashion? But I suspect the answer to that would be around when 'true 3D' became popular, as frame buffers are a requirement in most kinds of z-buffering based 3D rendering techniques.
Williams always used a framebuffer and blitter or GPU combo, from Defender through their exit from the arcade business. In the case of the TMS340x0 games, the blitter was special-function hardware in the CPU itself, but they never did raw software drawing.
The classic tradeoff was that framebuffers are more flexible, while tilemaps/sprites let you achieve massive amounts of animation with minimal CPU power. Space Invaders visibly speeds up as you kill the aliens and reduce the draw load on the CPU.
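A toy cost model makes that tradeoff concrete. This is a hypothetical sketch, not real hardware code; the per-object costs are made-up round numbers just to show the scaling:

```python
# Hypothetical cost model: a software-rendered framebuffer pays a
# per-pixel cost for every object, every frame, while tilemap/sprite
# hardware only needs the CPU to update a few table entries per object.

def framebuffer_frame_cost(num_objects, pixels_per_object=8 * 8):
    """CPU work per frame: erase and redraw every object's pixels."""
    return num_objects * pixels_per_object * 2  # erase pass + draw pass

def tilemap_frame_cost(num_objects, writes_per_object=3):
    """CPU work per frame: poke x, y, and a tile index per object."""
    return num_objects * writes_per_object

# With a full wave of 55 invaders the software renderer pushes thousands
# of pixel operations per frame; as aliens die, each frame finishes
# sooner, which is exactly why the game visibly speeds up.
print(framebuffer_frame_cost(55))  # 7040 pixel operations
print(tilemap_frame_cost(55))      # 165 register/table writes
```

Space Invaders had no frame-rate governor, so frame time was simply draw time, and the aliens' step rate tracked it directly.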
The cutoff, as you'd expect, was polygonal 3D.
Sega Saturn was a hybrid system: the VDP1 sprite/polygon chip renders into a framebuffer, like the PSX and N64 GPUs do, but the framebuffer is then mixed with the tilemap layers from VDP2. This is the root cause of a lot of its limitations, since all metadata about the sprite/polygon pixels is lost once they're written to the framebuffer. On the plus side, since the tilemap layers could scale and rotate, it allowed developers to take a lot of load off of the tragically slow sprite/polygon chip without giving up on 3D.
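The metadata-loss problem can be sketched as a per-pixel mix step. This is a loose illustration of VDP2-style final compositing, not actual Saturn behavior or register layout: the point is that each tilemap layer carries its own priority, while the VDP1 framebuffer arrives as bare colors, so the entire 3D scene has to sort as one layer:

```python
# Hypothetical sketch of final-mix compositing (names are invented).
# Tilemap layers each carry a priority; the sprite/polygon framebuffer
# contributes only a color, because per-polygon depth/priority was lost
# when the pixel was written.

TRANSPARENT = None

def mix_pixel(fb_color, fb_priority, layers):
    """layers: list of (color_or_None, priority). Highest priority wins."""
    candidates = [(fb_priority, fb_color)] if fb_color is not TRANSPARENT else []
    candidates += [(p, c) for c, p in layers if c is not TRANSPARENT]
    return max(candidates)[1] if candidates else 0  # 0 = backdrop color

# HUD tilemap (priority 7) in front of the 3D framebuffer (priority 4),
# scrolling background tilemap (priority 1) behind it:
print(hex(mix_pixel(0xFF0000, 4, [(0x00FF00, 7), (0x0000FF, 1)])))  # 0xff00: HUD wins
```

Note that `fb_priority` is a single value for the whole framebuffer: you can put a tilemap in front of or behind the 3D scene, but not in between two polygons.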
By contrast, PlayStation and N64 both used a GPU/framebuffer setup, and so did all succeeding consoles, including the Dreamcast. Tilemaps stuck around a little longer in arcades because they were cheap and made it easy to get the HUD/text layer going quickly. Atari's STUN Runner style hardware, Sega's Model 1/2/3 and Namco's System 21/22/23 all did this to at least some degree, and on most of those systems you could program whether each tilemap layer appeared in front of or behind the 3D framebuffer.