why don't decimals work?
I've recently started using Unity and ran into one of the biggest problems I've faced so far: decimals. Whenever I try to write 0.5 instead of 1, it instantly stops working. Why don't decimals work? Here is my code; it's really simple. The problem is at spawnObj(grass, x, height + .5); — at 1 it spawns too high, and I need it in the middle. Any tips?
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class ProceduralGeneration : MonoBehaviour
{
    [SerializeField] int width, height;
    [SerializeField] GameObject dirtblock, grassblock, grass;

    void Start()
    {
        Generation();
    }

    void Generation()
    {
        for (int x = -100; x < width; x++)
        {
            int minHeight = height - 1;
            int maxHeight = height + 2;
            height = Random.Range(minHeight, maxHeight);
            for (int y = -10; y < height; y++)
            {
                spawnObj(dirtblock, x, y);
            }
            spawnObj(grassblock, x, height);
            spawnObj(grass, x, height + .5);
        }
    }

    void spawnObj(GameObject obj, int width, int height)
    {
        obj = Instantiate(obj, new Vector2(width, height), Quaternion.identity);
        obj.transform.parent = this.transform;
    }
}
Answer by HicorySauce · Dec 27, 2020 at 10:00 PM
@WolfTraits C# is a strongly typed language, meaning you have to state what data type a variable holds when you declare it. You seem to have gotten the hang of doing this for int (and probably some other types) when you declared the maxHeight and minHeight ints.
The main benefit of this is safety: your code won't break quite as easily when it balloons into something much more complex. However, it also means you need to be careful about what kind of data you are trying to store in each variable. There are two issues with the decimals you are trying to use. The first is that you are trying to add a decimal to a variable of type int, and ints can only hold whole numbers. Integers are an important concept in math as well as programming, and I would suggest looking into their basic properties.
The second is that there are different kinds of "decimals" in C#. The most common (and what you should probably be using in this case) is called a float. To declare a float instead of an int, you just write float height; instead of int height;. Moreover, when you write a float literal, you put an f after the number, like so: float height = 1.5f;. Floats are best most of the time, but there are other fractional types too: double (suffix d, or no suffix at all when the number has a decimal point) and decimal (suffix m).
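Applied to the question's code, a minimal sketch of the fix (assuming the spawned objects don't need integer coordinates) is to make spawnObj take floats and write the offset with an f suffix:

```csharp
// Sketch: spawnObj reworked to take float coordinates.
// The int arguments used elsewhere in the script (x, y, height)
// are implicitly converted to float, so those calls still compile.
void spawnObj(GameObject obj, float x, float y)
{
    GameObject instance = Instantiate(obj, new Vector2(x, y), Quaternion.identity);
    instance.transform.parent = this.transform;
}
```

The problem line then becomes spawnObj(grass, x, height + 0.5f); — the f suffix makes 0.5 a float literal, and int + float yields a float, which matches the new parameter type.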
To expand a little more on this:
The reason you can use integers (32-bit) in this situation is that they are implicitly converted to float (32-bit) values: every int fits within float's range, so the conversion always succeeds (though very large ints can lose some precision).
By contrast, a double (64-bit) holds more data than a float (32-bit) does, and by plain C# rules (Unity aside), it is the default type for a numeric literal with a decimal point.
// No suffixes, but values within range can be implicitly converted
sbyte sb = [-128 to 127] // Signed, 8-bit
byte b = [0 to 255] // Unsigned, 8-bit
// No suffixes, but values within range can be implicitly converted
short s = [-32768 to 32767] // Signed, 16-bit
ushort us = [0 to 65535] // Unsigned, 16-bit
// Using MinValue/MaxValue for the ranges from here for simplicity
// (these properties also exist on byte and short)
// Suffixes: none, u -- 205, 205u
int i = [int.MinValue to int.MaxValue] // Signed, 32-bit
uint ui = [uint.MinValue to uint.MaxValue] // Unsigned, 32-bit
// Suffixes: l, ul -- 205l, 205ul
long l = [long.MinValue to long.MaxValue] // Signed, 64-bit
ulong ul = [ulong.MinValue to ulong.MaxValue] // Unsigned, 64-bit
// A few floating-point examples, as well
// https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/floating-point-numeric-types
// Suffix: f -- 205f, 205.0f
float f = [float.MinValue to float.MaxValue] // 32-bit
// ~6-9 digit precision
// Suffix: d, or none with a decimal point -- 205d, 205.0
double d = [double.MinValue to double.MaxValue] // 64-bit
// ~15-17 digit precision
// Suffix: m -- 205m, 205.0m
decimal m = [decimal.MinValue to decimal.MaxValue] // 128-bit
// 28-29 digit precision, smaller range than float/double
// "Because the decimal type has more precision and a smaller range than both float and double, it's appropriate for financial and monetary calculations."
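To see the suffixes and implicit conversions from the table in action, here is a small plain-C# sketch (no Unity required; the class name is just illustrative):

```csharp
using System;

class NumericLiteralsDemo
{
    static void Main()
    {
        int i = 205;
        float f = i;          // implicit widening: int -> float always compiles
        double d = 205.0;     // a literal with a decimal point is a double by default
        float g = 205.0f;     // the f suffix makes the literal a float
        decimal m = 205.0m;   // the m suffix makes the literal a decimal

        // float bad = 205.0; // compile error: no implicit double -> float conversion

        Console.WriteLine(g + 0.5f); // float arithmetic with a float literal
        Console.WriteLine(d);
        Console.WriteLine(m);
    }
}
```

This is exactly why height + .5 fails in the question: .5 is a double, and the result can't be passed where an int (or even a float) is expected without an explicit conversion or an f suffix.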
I've added some Wikipedia links to this answer. In addition, if you're new to the concept of floating-point numbers in computer science, I highly recommend watching this Computerphile video.
Answer by Skrobie · Dec 28, 2020 at 02:20 AM
Most of Unity depends on floats and integers. You would have to use 0.5f instead of 0.5 — this declares it as a float (common in Unity) instead of a double (the default in C#). There are reasons why floats are better in this environment, as explained in the other answers.