If code size is a real consideration then I would say 'pick your battle'.
Find the part that is destroying the most precision, use fixed point just there, and convert back afterward.
My rule of thumb, even with full floating-point precision, is to assume one decimal digit of lost precision confidence per operation. Not always true, but usually close enough.
I would probably use a 64-bit int (even if it costs a bunch more instructions/memory): shift the value up, do the integer mul/div, shift it back down, then drop it back into the float with a round so the rest of the code is OK.
Even that may not be 'OK' depending on how many operations you do. For divides, roughly, think of it as one digit you can't trust anymore. For multiplies you need 2x the space: two 8-bit values multiplied together can in theory fill a 16-bit value. You can reduce that by limiting your inputs: restrict each to 0-15 (4 bits) and the product still fits in 8 bits.