What makes up the binary data of a 16-bit heightmap?
Over a year ago I set out to build a terrain generator that takes a drawing as input (to define regions such as mountains, fields, lakes/oceans, etc.), and I kept running into the problem of exporting 16-bit raw heightmaps. No matter where I looked, no matter how many fruitless nights I spent on Google, I couldn't turn up a single piece of information. Eventually I realized that Unity probably lets me set terrain heights directly through scripts, and lo and behold, it does. I don't know how to do that yet either, but I'll get to it eventually. In the meantime, I finally started designing the algorithms needed to make this happen, and it's coming together well.
At the same time, I would love to have a version that doesn't depend on Unity. While the standalone version couldn't support automatic placement of trees, grass, and textures like the Unity version will, I'd like others to be able to use the heightmap generator too. Besides, understanding how heightmaps actually work at the byte level would certainly help me with this project.
Yet I haven't found ANYTHING that explains the binary layout of a raw 16-bit image. So I thought: maybe the reason nobody talks about it is that it's stupidly simple? I opened up Eclipse and quickly wrote and exported two raw files: one that is just a sequence of shorts cycling from 0 to 16000 over and over, 4097x4097 values in total, and another that ramps gradually from 0 up to the 16-bit limit at the end of the file (2048x2048 this time [yes, deliberately skipping the +1 to see what happens]). Both produce maps that constantly jump between 0 and some other number from one height point to the next. Messing around with import settings completely changes the layout of the map, but neither ever looks like it should.
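To show what I mean, here's a minimal Java sketch of the kind of export I'm doing (the dimensions are simplified placeholders, not my actual 4097x4097 file). One thing I only noticed while writing this out: `DataOutputStream.writeShort` always emits the high byte first (big-endian), and I have no idea whether whatever reads the raw file expects that byte order or the opposite one.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class RawExport {
    // Build a raw 16-bit heightmap buffer: width * height values,
    // each written as one 2-byte short with no header of any kind.
    // DataOutputStream writes big-endian; a reader assuming
    // little-endian would see exactly the kind of 0-then-huge-number
    // jumping described above.
    static byte[] exportGradient(int width, int height) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        int total = width * height;
        for (int i = 0; i < total; i++) {
            // ramp from 0 at the start of the file to 65535 at the end
            int value = (int) ((long) i * 65535 / (total - 1));
            out.writeShort(value); // high byte first (big-endian)
        }
        out.flush();
        return bytes.toByteArray();
    }
}
```

Dumping that array to disk with a `FileOutputStream` gives exactly width * height * 2 bytes and nothing else, which is my current understanding of what "raw" means here.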
So, does anyone know how 16-bit heightmaps are supposed to be laid out? Or am I just an idiot who managed to spend a year on Google when the answer could have been found in 5 seconds? I'm really lost.