From what I've seen, most other emulators get this wrong too. With an interlaced signal, the raster beam strikes the phosphor surface between the lines of the previous field. The beam has a "height", though, with non-linear brightness from top to bottom, and unless the monitor has unusually pronounced scanlines, adjacent beam positions probably overlap significantly. Simply stacking rows of pixels from alternating fields isn't really correct either.

In my opinion, we should be rendering at 5x the field's vertical output resolution, with the beam centre at full opacity in the middle row and lower opacity in the two rows above and below the beam position, then merging the two fields at a 0.5-line offset using alpha blending, so the fields blend together the way your eyes would have blended them on a real screen.

Obviously this kind of blending is also needed on the horizontal axis: a horizontal line is just an analog waveform, and there's rise and fall time even in a supposedly instant digital transition from, say, black to white. But the physical characteristics of an interlaced phosphor display mean there's supposed to be significant blur on the vertical axis too, and I don't think that's as widely understood.

With 4K displays now widely available, there's actually an opportunity to do this properly, since a 10x upsampled output resolution to account for beam width is finally realistic. Scanlines can be generated properly under this kind of approach as well.
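To make the idea concrete, here's a minimal sketch of the vertical half of this scheme. Everything specific in it is my own assumption, not an established implementation: I use a Gaussian beam falloff (the real profile would need to be measured), a 10x vertical upsample per field line so the 0.5-line field offset lands on an integer row, and simple clipped accumulation standing in for proper per-row alpha blending.

```python
import math

# Hedged sketch: blend two interlaced fields into one high-resolution frame.
# Assumptions (mine, not from the post): Gaussian beam profile, 10x vertical
# upsample per field line, clipped additive accumulation instead of true
# alpha blending.

def beam_profile(radius=5, sigma=2.0):
    """Vertical intensity falloff of the beam, centred on row offset 0."""
    return [math.exp(-(r * r) / (2 * sigma * sigma))
            for r in range(-radius, radius + 1)]

def render_fields(even_field, odd_field, upsample=10):
    """even_field / odd_field: per-field scanline intensities in 0.0-1.0.
    Returns one merged column of the frame at `upsample` x the field's
    vertical resolution, with the odd field offset by half a field line."""
    profile = beam_profile()
    radius = len(profile) // 2
    height = len(even_field) * upsample + upsample  # margin for beam spill
    frame = [0.0] * height
    half_line = upsample // 2  # 0.5 field line, expressed in output rows

    for field, offset in ((even_field, 0), (odd_field, half_line)):
        for i, intensity in enumerate(field):
            centre = i * upsample + upsample // 2 + offset
            for r, weight in zip(range(-radius, radius + 1), profile):
                row = centre + r
                if 0 <= row < height:
                    # Overlapping beam contributions accumulate and clip,
                    # approximating how overlapping passes sum on phosphor.
                    frame[row] = min(1.0, frame[row] + intensity * weight)
    return frame

merged = render_fields([1.0, 0.5, 1.0], [0.8, 0.8, 0.8])
```

This only models a single vertical column; a real renderer would run the same profile per output pixel column, and apply the analogous rise/fall filtering horizontally.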
Edited by Nemesis1207 (06/27/18 02:32 AM)