Managing floating point rounding
Hello. I am somewhat new to Unity. I have a 2D card game. In my code, I instantiate an object and then use Lerp to move it to a spot on the screen. Before calling Lerp, I do some calculations to determine the start and end positions, and I need some degree of accuracy. If it helps: I am animating a playing card moving from a player's position on the board to a discard pile. I want to offset each card so that a newly added card does not completely hide the previous one, so I add an offset of 0.5. But between doing the calculations and the Lerp call, floating point errors creep in, and it doesn't take long for them to accumulate enough that the cards stop looking clean. Any suggestions? Here is my code:
Vector3 offsetPos = CardObject.transform.position;
// Take the x of the first card under the discard pile's holder, shifted by the 0.5 offset
offsetPos.x = discardPileScript.cardHolder.transform.GetChild(0).transform.position.x - 0.5f;
...
while (i < 1.0f)
{
    i += Time.deltaTime * rate;
    CardObject.transform.position = Vector3.Lerp(startPos, offsetPos, i);
    yield return null;
}
EDIT:
I thought more detail might help. I have an event that's called like this:
public void PenalizePlayer()
{
    for (int i = 0; i < 2; i++)
    {
        if (PlayerCards.Count > 1)
        {
            // Gets the number of cards in the discard pile. I want to model a player discarding two
            // cards from the bottom of their deck and putting them at the bottom of the discard pile.
            ..code to get values
            // Just an object that refers to the player's hand; create a clone.
            GameObject go = cardImages[0];
            GameObject newCard = Instantiate(go, go.transform.position, Quaternion.identity);
            // I want these cards to lie under the top cards.
            int sortingId = -(i + numberCardsInDiscardPile);
            // This should animate the newly instantiated card and lerp it to the position of the discard pile.
            StartCoroutine(Lerp(newCard, go.transform.position, discardPileScript.cardHolder.transform.position, gameMachineScript.timeBetweenCardsDelt, cardToPullFromDeck, true, sortingId));
            .. other stuff
        }
    }
}
private IEnumerator Lerp(GameObject CardObject, Vector3 startPos, Vector3 endPos, float time, Card card, bool isPenalty, int sortingId = 0)
{
    float i = 0;
    float rate = 1.0f / time;
    DiscardPileScript discardPileScript = gameMachineScript.GetComponent<GameMachine>().DiscardPile.GetComponent<DiscardPileScript>();
    Vector3 offsetPos = CardObject.transform.position;
    Vector3 cardHolderoffsetPos = discardPileScript.cardHolder.transform.position;
    int cardCount = discardPileScript.GetCountOfCardsInDiscardPile();
    // To track the positions of the cards in the discard pile, I add them to an empty game object in order.
    // This gets the first game object under that parent and its position, then subtracts 0.5f.
    float newOffset = discardPileScript.cardHolder.transform.GetChild(0).transform.position.x - 0.5f;
    // This debug is where I see the issue. All the game objects in the discard pile are in their correct
    // positions per the inspector; the first card has an x position of 0.5.
    Debug.Log(newOffset);
    // When the debug prints out the new offset position of the new object, the first item that gets added
    // now has an x value of -1.117587e-08 and the second object has an x value of -0.355666 (in the inspector).
    // But the debug log says -0.05000001 and -0.505666.
    // The second time this code runs, now using the last added card as its base reference, the next card has,
    // in the inspector, an x value of -0.855666 and the last card x = -0.2341224. In the debug, though, the
    // values are -1.055666 and -0.634. It seems like applying the 0.5f offset gets me values that are pretty
    // far off in terms of floating point accuracy. I assume it's my code?
    offsetPos.x = newOffset;
    offsetPos.y = endPos.y;
    offsetPos.z = endPos.z;
    while (i < 1.0f)
    {
        i += Time.deltaTime * rate;
        CardObject.transform.position = Vector3.Lerp(startPos, offsetPos, i);
        yield return null;
    }
    CardObject.transform.position = offsetPos;
    CardObject.gameObject.transform.GetChild(0).GetComponent<SpriteRenderer>().sortingOrder = sortingId;
}
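For reference, one way to see why the logged values and the inspector disagree is to print world and local coordinates side by side; the inspector's Position field shows localPosition for a parented object, while transform.position is world space. A minimal diagnostic sketch, assuming the same discardPileScript reference as above:
// Diagnostic sketch: compare world-space and local-space values for the holder and its first child.
Transform holder = discardPileScript.cardHolder.transform;
Transform firstCard = holder.GetChild(0);
Debug.Log("holder world: " + holder.position + ", lossyScale: " + holder.lossyScale);
Debug.Log("first card world: " + firstCard.position + ", local: " + firstCard.localPosition);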
Answer by Eno-Khaon · Nov 07, 2020 at 04:37 AM
Well, the main thing to consider is properly finalizing the position after the movement's been made:
while (i < 1.0f)
{
    // ...
}
// After the loop concludes, finalize the position to the exact intended location
cardObject.transform.position = offsetPos;
Because your card movements were only updating as long as the interpolator was less than 1, they never actually *reached* their destination. Additionally, multiplying by Time.deltaTime (as you should in this situation, don't get me wrong) meant having an arbitrary margin of error in the final position without correcting it afterward.
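Put together with the coroutine shape from the question, the suggested pattern might look like the following sketch (it reuses the question's variable names and is not the thread's exact code):
// Sketch: drive the interpolator with scaled deltaTime, then snap to the exact target.
float i = 0f;
float rate = 1.0f / time;
while (i < 1.0f)
{
    i += Time.deltaTime * rate;
    CardObject.transform.position = Vector3.Lerp(startPos, offsetPos, i);
    yield return null;
}
// Finalize to the exact intended location so no frame-timing error is left behind.
CardObject.transform.position = offsetPos;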
I thought this as well and tried it, but the error still persists. It appears to be coming from the first line, where I subtract 0.5 from the x value of the object's Vector3. While not shown, this code is part of an IEnumerator and is run through a coroutine. I also tried calculating the value outside the coroutine and passing it in as a parameter, but the error persists. I tried logging the values as well, and that just adds to the confusion: the Debug value does not match the values I see in Unity's inspector.
Hmm... what are you seeing as Log output vs. the inspector in this situation? Or, more specifically, what value are you logging to test this? At face value everything seems reasonable, unless something is off with the discard pile's first child transform.
Based on the edit to your question, it sounds to me like at least one GameObject probably has a scale other than the baseline (1, 1, 1). When you position the cards at specific world locations, what the inspector displays for them as children of the discard pile is their local position, scaled relative to the parent.
I can't say what's changing when a previously placed card reports a new position after you add another one. However, the initial inspector values you describe sound like the local positions are being set correctly. Are the cards being parented after that positioning, so they end up looking further skewed in the inspector?
As a simple example, I have the following set up as a test case:
A parent object, Base, at position (0.35, 0, 0) with scale (0.5, 0.5, 0.5)
A first child object, Child1, at local position (-0.5, 0, 0) (global (0.1, 0, 0)) with local scale (1, 1, 1)
A second object, Child2, which will be made a child through script in Start().
public Transform baseT;
public Transform child1;
public Transform child2;
void Start()
{
    child2.position = child1.position + Vector3.left * 0.5f;
    Debug.Log("Before: " + child2.localPosition.x);
    child2.SetParent(baseT, true);
    Debug.Log("After: " + child2.localPosition.x);
}
// Output:
// "Before: -0.4" -- no parent yet, so localPosition matches the world position
// "After: -1.5" -- parented with worldPositionStays, so the world position is unchanged but localPosition is rescaled
If the discard pile happens to be near the origin (0, 0, 0) and you're using ___.SetParent(___, false); (or equivalent), then it would also be easier to overlook positioning problems, but scale changes would wind up more pronounced where applicable.
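As a side note on the two overloads, the difference looks roughly like this (a sketch where pile and card are illustrative Transform names, not identifiers from the question; worldPositionStays is the second parameter of Transform.SetParent):
// Sketch: the second argument of SetParent decides whether the world position is preserved.
// "pile" and "card" stand in for the discard pile holder and a card transform.
void Reparent(Transform pile, Transform card)
{
    // Option A: keep the card where it is in the world; localPosition is recomputed
    // against the parent's position and scale (this is the value the inspector shows).
    card.SetParent(pile, true);

    // Option B: keep the card's local values as-is; its world position then shifts
    // with the parent's transform instead.
    // card.SetParent(pile, false);
}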
I just posted a long update (still pending moderator approval). But in the meantime, I found that using localPosition when assigning an object to a parent gives better results. So, when I set the object's position, instead of:
float newOffset = discardPileScript.cardHolder.transform.GetChild(0).transform.position.x - 0.5f;
use:
float newOffset = discardPileScript.cardHolder.transform.GetChild(0).transform.localPosition.x - 0.5f;
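Doing the whole offset calculation in the parent's local space keeps the math consistent with what the inspector displays for child objects. A minimal sketch, assuming the new card ends up parented under the same cardHolder and that y and z of 0 are acceptable (both of those are assumptions, not details from the thread):
// Sketch: compute the offset in the cardHolder's local space and assign localPosition,
// so the numbers match the inspector's Position field for child objects.
Transform holder = discardPileScript.cardHolder.transform;
float newOffset = holder.GetChild(0).localPosition.x - 0.5f;

newCard.transform.SetParent(holder, false);                       // assumption: parenting happens here
newCard.transform.localPosition = new Vector3(newOffset, 0f, 0f); // y/z of 0 are an assumption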