> Custom hardware would allow us to capture the video before it has been converted from composite to component form. This is a lossy, imprecise process that is dependent on the decoding algorithms built into the video capture hardware. For archiving, we'd greatly prefer to keep it in composite format, and do the decoding in software using whatever algorithms people can cook up in the future.
MAME's HLSL system can potentially decode an NTSC composite video bitstream to RGB on the GPU, which could save a great deal of processing time and work even on very low-end hardware with a shader-capable GPU (e.g. the dreaded Raspberry Pi). I know Ryan's very interested in pursuing this if someone can create such a bitstream from a laserdisc.
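To make the decoding step concrete, here is a heavily simplified software sketch of what a composite-to-RGB decoder does, independent of MAME's actual HLSL shaders: demodulate the chroma against the 3.579545 MHz color subcarrier, low-pass to separate luma from chroma, and apply the standard YIQ-to-RGB matrix. The sample rate, the one-period box filter, and the `decode_line` helper are illustrative assumptions, not anyone's real pipeline; a production decoder would use proper comb/band-pass filtering and burst-locked phase recovery.

```python
import math

FSC = 3_579_545.0      # NTSC color subcarrier frequency, Hz
SAMPLE_RATE = 4 * FSC  # assumed 4x-subcarrier sampling, common in capture gear

def yiq_to_rgb(y, i, q):
    """Standard NTSC YIQ -> RGB conversion matrix."""
    r = y + 0.956 * i + 0.621 * q
    g = y - 0.272 * i - 0.647 * q
    b = y - 1.106 * i + 1.703 * q
    return r, g, b

def decode_line(samples, phase0=0.0):
    """Toy single-scanline decode of a composite sample stream.

    Demodulates chroma by multiplying against quadrature subcarrier
    references, then uses a crude one-subcarrier-period moving average
    as the low-pass filter. Real decoders track the color burst to
    recover phase0 instead of being handed it.
    """
    i_raw, q_raw = [], []
    for k, s in enumerate(samples):
        ph = phase0 + 2.0 * math.pi * FSC * k / SAMPLE_RATE
        i_raw.append(s * 2.0 * math.cos(ph))
        q_raw.append(s * 2.0 * math.sin(ph))

    def lowpass(x, w=4):  # box filter over one subcarrier period (4 samples)
        return [sum(x[max(0, k - w + 1):k + 1]) / min(w, k + 1)
                for k in range(len(x))]

    y = lowpass(samples)  # luma: low-passed composite (ignores chroma bleed)
    i = lowpass(i_raw)
    q = lowpass(q_raw)
    return [yiq_to_rgb(y[k], i[k], q[k]) for k in range(len(samples))]

# A flat (chroma-free) signal should decode to a uniform gray.
pixels = decode_line([0.5] * 64)
```

Each pixel of this inner loop is independent, which is exactly why the job maps so well onto a fragment shader: the GPU can run the demodulate-filter-matrix chain for every output pixel in parallel.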