Is it better to check before you change a variable?
Instead of just assigning the value every frame, is it better to do:

if (valueToChange != targetValue)
    valueToChange = targetValue;

I am asking because I wondered whether setting a value takes longer than reading and comparing values.
In all likelihood, unless you are changing millions of values, you will not notice the slightest difference. This is an example of premature optimisation: http://c2.com/cgi/wiki?PrematureOptimization
Answer by fafase · May 30, 2015 at 08:40 AM
Nope. If you looked at the assembly, you would get something like:

move var1, reg1
move var2, reg2
if reg1 != reg2
move reg2, reg1
move reg1, var1
move reg2, var2

and without the check you get:

move var1, reg1
move var2, reg2
move reg2, reg1
move reg1, var1
move reg2, var2

In the first case you execute three operations at best (when the values are already equal and the moves are skipped) and six at worst, while the second case executes five operations every time.
Answer by Eno-Khaon · May 30, 2015 at 08:42 AM
When in doubt, test for yourself!
I would suggest making a loop and trying it in three forms. I can't guarantee how many iterations it should go through, but I'd start with something like 1,000,000 for most operations and work your way up from there.
// C#
float startTimeStamp;
float endTimeStamp;
float timeElapsed;
float valueToChange;
float targetValue;

void Update()
{
    if (Input.GetKeyDown(KeyCode.Q))
    {
        valueToChange = 0.27f;
        targetValue = 3.8235f;
        startTimeStamp = Time.realtimeSinceStartup;
        for (int i = 0; i < 1000000; i++) // increase the count from 1,000,000 as necessary. If there's a huge hiccup in performance, your test conditions are ideal!
        {
            // Test only one of these at a time

            // Version 1 -- boolean test
            if (valueToChange != targetValue)
                valueToChange = targetValue;

            // Version 2 -- no boolean test
            valueToChange = targetValue;

            // Version 3 -- empty loop
            // Run an empty version as a means of ensuring there's no meaningful overhead for doing so.
            // A smart compiler will remove the loop entirely, but there's no harm in covering all your bases.
        }
        endTimeStamp = Time.realtimeSinceStartup;
        timeElapsed = endTimeStamp - startTimeStamp;
        Debug.Log("Start Time: " + startTimeStamp);
        Debug.Log("End Time: " + endTimeStamp);
        Debug.Log("Time Elapsed: " + timeElapsed);
    }
}
Why "Time.realtimeSinceStartup" rather than "Time.time"? That's simple! Time.time is only updated once per frame, so it won't change while a single Update call is running, and a frame's reported duration is capped anyway (by Time.maximumDeltaTime, roughly a third of a second by default). So if something takes longer than that to process, you can no longer track the time taken. That said, realtimeSinceStartup isn't affected by Time.timeScale, so it's not the universal best choice for all situations.
Edit: Ah, right. Forgot the obvious third test case.
The difference would be way too small to be significant. It is even likely that the test returns a wrong value every now and then, because it is not pinned to a single core: the OS may pause that thread and resume it later, skewing the result.
True. And your information would certainly be better for this particular scenario, but when you're not familiar with assembly, or you're working with much larger functions or actions, this is a fairly simple testing environment to set up. As an example, work I did with texture modification (Free version prior to Unity 5) involved learning cutoff points for whether modifying a few individual pixels (SetPixel) or wide blocks (SetPixels) was faster to calculate.