Floats and int performance difference?
Is there a difference between using floats and ints?
For example, when calculating the distance between objects or a rotation speed, is it better to use ints or floats? Does it make a difference to the CPU?
I am asking because I recently dug up some old tutorial code that uses ints for things like rotationSpeed and maxSpeed, which I usually use floats for.
I think I am noticing differences in how smoothly objects rotate, but I'm not sure.
So, is there a performance difference between ints and floats when used for things like speed, rotationSpeed, etc.?
Thanks.
I doubt it would make much of a difference on a PC, unless you're looking at memory usage, where ints usually seem to use 2 bytes while floats use 8. The reason you see a difference in rotation is that an int only holds whole numbers, while a float also holds fractions, which gives a more accurate result. Have a look at http://en.wikipedia.org/wiki/Integer_(computer_science) and http://en.wikipedia.org/wiki/Double-precision_floating-point_format
Ints and floats both use 4 bytes. Shorts use 2 bytes, and doubles use 8, but you wouldn't commonly use those in Unity.
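To see why an int rotation variable can look jerky, here is a small standalone sketch. It is plain Java rather than Unity C# so it can run outside the engine, and the numbers (50 degrees per second, 60 frames per second) are hypothetical, not taken from the tutorial in question. The point is that a per-frame increment smaller than 1 truncates to zero in an integer accumulator:

```java
// Simulates one second of per-frame rotation updates to show why an
// integer angle accumulator stalls while a float accumulator tracks
// the intended speed. (Hypothetical values, not Unity code.)
public class RotationDemo {
    static final float DT = 1f / 60f; // ~16.7 ms per simulated frame

    // Integer accumulator: 50 * 0.0167 = 0.83, which truncates to 0
    // on every frame, so the angle never advances at all.
    static int rotateInt(int degreesPerSecond, int frames) {
        int angle = 0;
        for (int i = 0; i < frames; i++) {
            angle += (int) (degreesPerSecond * DT);
        }
        return angle;
    }

    // Float accumulator: the fractional increments add up correctly.
    static float rotateFloat(float degreesPerSecond, int frames) {
        float angle = 0f;
        for (int i = 0; i < frames; i++) {
            angle += degreesPerSecond * DT;
        }
        return angle;
    }

    public static void main(String[] args) {
        System.out.println("int after 1 s:   " + rotateInt(50, 60));    // 0 degrees
        System.out.println("float after 1 s: " + rotateFloat(50f, 60)); // ~50 degrees
    }
}
```

So the smoothness difference you noticed is a precision issue, not a performance one: store anything that accumulates fractional amounts per frame in a float.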
Answer by Jamora · Sep 02, 2013 at 12:00 AM
Have a look at what the people at Stack Overflow have to say: floating point vs integer calculations on modern hardware. In short, the general consensus seems to be "it depends".
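If you want to check the "it depends" claim on your own machine, here is a rough micro-benchmark sketch (plain Java; no JIT warmup or other rigor, so treat the timings as a ballpark only, and note the loop bodies are my own arbitrary choice):

```java
// Rough comparison of int vs. float arithmetic throughput.
// Not a rigorous benchmark: no warmup, single run, results vary by CPU.
public class IntVsFloat {
    static long intSum(int n) {
        long sum = 0;
        for (int i = 1; i <= n; i++) sum += i * 3;   // integer multiply-add
        return sum;
    }

    static float floatSum(int n) {
        float sum = 0f;
        for (int i = 1; i <= n; i++) sum += i * 3.0f; // float multiply-add
        return sum;
    }

    public static void main(String[] args) {
        int n = 10_000_000;
        long t0 = System.nanoTime();
        long a = intSum(n);
        long t1 = System.nanoTime();
        float b = floatSum(n);
        long t2 = System.nanoTime();
        System.out.printf("int:   %d ms (sum=%d)%n", (t1 - t0) / 1_000_000, a);
        System.out.printf("float: %d ms (sum=%f)%n", (t2 - t1) / 1_000_000, b);
    }
}
```

On most modern desktop CPUs the two loops come out close enough that neither would ever show up in a game's frame time, which is the practical takeaway.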
My personal opinion is that in games, this kind of optimization is useless. Your time would be better spent optimizing your memory allocations (reducing garbage) or moving unnecessary computation out of the Update function.
Totally agree: the difference, if any, is negligible. In any case, scripts do too little calculation to deserve such optimization. Recalculating matrices every frame (Unity's job) is usually much heavier, and the real donkey work is done by the GPU: many thousands of floating-point operations per frame. This kind of optimization was very important in the past (back when Quake was released), because GPUs didn't do any of the calculation.
Answer by AFDozerman · Sep 02, 2013 at 12:15 AM
Generally speaking, there shouldn't be any noticeable difference, although it really depends on your processor. An AMD FX-series chip will have faster single-threaded floating-point performance but significantly slower multithreaded floating-point performance (each pair of cores shares one FPU). Intel's should be more even.