> It is the range passed from the OS layer to MAME, which then gets converted to the port value needed by the game driver. Which can be anything: 1 bit, 2 bits, 8 bits, 16 bits, etc.
>
> Read the input .h files; it's described at the top of one of them.
Ok, thank you.
Would it not be more convenient, and a more "object-oriented" design, if each driver converted the value for itself? The driver already has to know what it needs, and the data is defined in the driver part of the code anyway. In that case the emulator core would not need to care which driver is running; to MAME it would all look the same.

Anyway, this is the type of simplification and reduction of code I go for in my rewrites, where the single purpose and goal is to make MAME as small and as fast as possible while still keeping it portable. I'm porting it to mobile platforms, by the way; that's why.
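To make the idea concrete, here is a minimal sketch of driver-owned conversion. All names here (`os_input_t`, `driver_port`, `paddle_8bit`, `read_port`) are hypothetical illustrations, not real MAME identifiers: the core hands every driver the same normalized OS value and the driver's own callback maps it to whatever width its hardware expects.

```c
#include <stdint.h>

/* Hypothetical sketch: instead of the core scaling the OS input range
   differently for every driver, each driver supplies a conversion
   callback. The core stays identical no matter which driver runs. */

typedef uint32_t os_input_t;    /* normalized 0..65535 from the OS layer */

typedef struct driver_port {
    unsigned bits;                          /* width the game hardware expects */
    uint32_t (*convert)(os_input_t raw);    /* driver-owned conversion */
} driver_port;

/* Example driver callback: an 8-bit paddle just keeps the top byte. */
static uint32_t paddle_8bit(os_input_t raw)
{
    return raw >> 8;    /* 0..65535 -> 0..255 */
}

/* The core no longer cares what the driver needs: */
static uint32_t read_port(const driver_port *p, os_input_t raw)
{
    return p->convert(raw);
}
```

With this shape, the core's read path is one indirect call per port, and all per-game knowledge stays in the driver file where the port is declared.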
As for 17 bits precision...
0 to 65535 fits in 16 bits, two bytes, so to store the whole signed range you need four bytes: the "long" or "doubleword" data type, that is, 32 bits.
Now, to fit that one extra bit on each side, which increases resolution insignificantly, you have to use "long long", that is, 64 bits or 8 bytes. Four more bytes just to store two extra bits.
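The storage arithmetic can be checked directly. One hedged note worth adding: even with one extra bit on each side, the values −65536..65536 themselves are still representable in a 32-bit integer; the 64-bit type only becomes unavoidable as headroom for intermediate math. The helper below is just an illustration of that range check:

```c
#include <stdint.h>

/* Range check for the storage argument above: 0..65535 needs 16 bits,
   and the signed range -65536..65536 (one extra bit on each side)
   still fits in a 32-bit integer, while "long long" is guaranteed
   to be at least 64 bits, i.e. four more bytes of storage. */
static int fits_in_int32(long long v)
{
    return v >= INT32_MIN && v <= INT32_MAX;
}
```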
To know whether it affects anything or not, we have to do it the other way too, so we have something to compare against. Input handling is the kind of thing that is processed at least once per frame, so just passing 8 bytes instead of 4 as a function argument can easily impact performance. Then take into account all the operations performed on those variables, and I bet you would gain at least 3-4 frames if everything were done with 32(16)-bit instead of 64(32)-bit variables. Not to mention hardware platforms that would have extra trouble processing and transferring those wider values simply due to processor architecture and bus width; who knows?
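To actually compare the two, one could write the same scale step in both widths and time them on the target platform. This is only a sketch under my own assumptions (a generic scale-and-divide step; the function names are made up), and the 3-4 frame figure above is a guess, not a measurement:

```c
#include <stdint.h>

/* Two variants of the same input-scaling step, for benchmarking
   against each other. The 32-bit version keeps 32-bit operands and
   only widens the intermediate product to avoid overflow; the 64-bit
   version passes and computes everything in 64 bits. */
static uint32_t scale32(uint32_t v, uint32_t num, uint32_t den)
{
    return (uint32_t)(((uint64_t)v * num) / den);
}

static uint64_t scale64(uint64_t v, uint64_t num, uint64_t den)
{
    return v * num / den;
}
```

Calling each of these a few million times under a timer would give a real number for one platform; whether it amounts to whole frames depends entirely on how often the input path runs relative to everything else MAME does per frame.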
All that aside, the motive for doing so, and for complicating the code for the sake of it, as you said, seems completely out of place. Why do you think "long long" would be more portable than a "doubleword" data type? If that even matters with a proper compiler, would it not rather be the opposite?
It's not up to today's code to try to be modern for tomorrow's hardware; it's up to the hardware to be backwards compatible with yesterday's software. Stick with ints and floats and I guarantee your code will compile and work forever, on whatever hardware the future brings... unless humanity goes completely crazy and we see the day when people eat their soup with a shovel instead of a spoon.