> > It seems like DICE was faster than that.
>
> I overlooked the details in the posts also. I briefly read the posts between Adam and Juergen from a couple of weeks ago.
Agreed, I think you're on the right track (but I can't say for sure; I'm not knowledgeable enough to compare DICE's and TTL's source code and tell what differs between the implementations).
I think once the implementation is solid, the optimizing will begin. As gregf said, Juergen is thinking about whether it's possible to split the simulation across multiple cores. It would even be amusing to see whether a "TTL DRC" would be possible. Or maybe GP-GPU programming lends itself better to discrete simulation. Who knows...
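For anyone curious why splitting this kind of simulator across cores is tricky: the heart of a gate-level simulator is usually an event-driven loop built around a single time-ordered event queue, and that global queue is exactly what every core would contend for. Here's a minimal sketch of that loop (purely illustrative; the class, net names, and the 10 ns NAND delay are all made up, and I haven't looked at how TTL or DICE actually do it):

```python
import heapq

class Sim:
    """Minimal event-driven gate-level simulator (illustrative only)."""
    def __init__(self):
        self.now = 0
        self.queue = []   # (time, seq, net, value) -- the global ordering
        self.seq = 0      # tie-breaker for events at the same timestamp
        self.nets = {}    # net name -> current logic value
        self.fanout = {}  # net name -> gates to re-evaluate on change

    def schedule(self, delay, net, value):
        heapq.heappush(self.queue, (self.now + delay, self.seq, net, value))
        self.seq += 1

    def connect(self, net, gate):
        self.fanout.setdefault(net, []).append(gate)

    def run(self, until):
        # The serial bottleneck: events must be consumed in time order.
        while self.queue and self.queue[0][0] <= until:
            self.now, _, net, value = heapq.heappop(self.queue)
            if self.nets.get(net) == value:
                continue              # no change, nothing to propagate
            self.nets[net] = value
            for gate in self.fanout.get(net, []):
                gate(self)            # a gate may schedule new events

def nand(a, b, out, delay=10):
    """Build a NAND gate as a closure; the 10 ns delay is invented."""
    def evaluate(sim):
        va = sim.nets.get(a, 0)
        vb = sim.nets.get(b, 0)
        sim.schedule(delay, out, 0 if (va and vb) else 1)
    return evaluate

sim = Sim()
g = nand("a", "b", "y")
sim.connect("a", g)
sim.connect("b", g)
sim.schedule(0, "a", 1)
sim.schedule(0, "b", 1)
sim.run(until=100)
print(sim.nets["y"])  # prints 0: both inputs high -> NAND output low
```

Since each popped event can schedule new ones at earlier times than events already queued elsewhere, cores can't just grab events independently without risking causality violations, which is why parallel discrete-event simulation usually needs either conservative lookahead windows or optimistic rollback schemes.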