How to do step-based movement
Hi. I'm looking for some info on how to do simple step-based movement to mimic the movement of a sprite on a low-resolution screen (84x48), like in an old Snake game (example below).
![alt text][1]
Since the game's actual resolution isn't just 84 by 48 and Unity uses units rather than pixels, I'm not sure how to make my sprite move like this. I've already tried a few things, such as a counter so it only moves when the counter divides by 2/4/8, etc.
if (counter % 4 == 0)
{
    transform.Translate(speed * Time.deltaTime, 0, 0);
}
Thanks. [1]: /storage/temp/22270-stepmove.png
Answer by bogheorghiu · Feb 26, 2020 at 08:14 PM
@mihaivdev I'm thinking you could map the screen to an array of 84x48 and constantly convert coordinates back and forth between the two systems (Unity's Vector3 transform.position and the array).
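A minimal sketch of that idea, assuming a scale of 10 world units per virtual pixel (the class and method names here are illustrative, not from the original answer):

```csharp
using UnityEngine;

public static class GridMapping
{
    // Assumed scale: how many world units one virtual pixel covers.
    const float UnitsPerPixel = 10f;

    // Convert a cell on the 84x48 virtual screen to a world position.
    public static Vector3 GridToWorld(int x, int y)
    {
        return new Vector3(x * UnitsPerPixel, y * UnitsPerPixel, 0f);
    }

    // Convert a world position back to the containing virtual-screen cell.
    public static Vector2Int WorldToGrid(Vector3 world)
    {
        return new Vector2Int(
            Mathf.FloorToInt(world.x / UnitsPerPixel),
            Mathf.FloorToInt(world.y / UnitsPerPixel));
    }
}
```

The game logic would then move whole cells in the 84x48 array, and the sprite's transform would only ever be assigned positions from GridToWorld, so it snaps cleanly from cell to cell.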
Answer by KaDaK4 · Feb 26, 2020 at 09:27 PM
You could do something like:
[SerializeField] private float stepInterval;
[SerializeField] private float stepDistance;
private float stepTimer;
private Vector3 direction = Vector3.right;

void Update()
{
    stepTimer += Time.deltaTime;
    if (stepTimer >= stepInterval)
    {
        stepTimer -= stepInterval;
        // Move one full step. No Time.deltaTime here: the timer
        // already controls how often a step happens, so each step
        // should cover the same fixed distance.
        transform.position += direction * stepDistance;
    }
}
Answer by phosphoer · Feb 26, 2020 at 09:47 PM
A common approach for this sort of thing is to use a combination of scaling and rounding. It also depends on exactly how closely you want to mimic the old school tech. I'll summarize a few ways you can go about this.
Scaling your positions
As you said, you probably don't want an actual resolution of 84x48, so some scaling is going to be necessary. For simplicity, let's say your game viewport is set to 840x480, a scale factor of 10.
You could store your 'real' position as either an int or float vector, in the range 0-84 on X and 0-48 on Y. I'd recommend making this a floating point value so that you can have velocities which slowly accumulate towards moving to the next pixel. All your game logic will work with this real position, and then you'll update the game object transform based on this vector.
// The 'real' position of our object, in float pixel coordinates on our imaginary low-res screen
Vector3 realPosition;
Then, you would display your low-res sprite with a scale of 10 (in this case), and set its transform position every frame based on your 'real' position.
// In Update() for the scaled sprite
Vector3 scaledPos = realPosition;
scaledPos.x = Mathf.Floor(scaledPos.x) * 10;
scaledPos.y = Mathf.Floor(scaledPos.y) * 10;
transform.position = scaledPos;
To understand how this works imagine you have a real X position of 44.5, or about halfway across the screen. This code will floor that value (round down), to 44, and then scale by 10, giving you 440. Once you reach 45, the position will jump to 450, skipping 10 units.
There are a few more things to consider, such as making sure you have the right combination of viewport and camera settings so that 1 of your scaled pixels maps to 1 unit in Unity. You'll also want to think about how your physics will be set up, since setting transform.position directly is incompatible with any physics being applied to that game object. You may need to separate the game objects used for display from the game objects actually involved in the simulation. So you'd basically have a tiny invisible game playing out in the 0-84 coordinate space, which sets the positions of the sprite game objects that inhabit the 0-840 coordinate space.
Scale your positions in a shader
You could also approach this by writing a custom shader for your sprites that does the same rounding logic in the vertex shader. This has the advantage that you wouldn't really need to worry about how the sprites are displayed or mess around with their positions; you just write your game logic like normal and let the shader handle stepping their movement. You would still have to sync the scale factor with your viewport settings to make 1 step = 1 'pixel' worth of movement, though.
Scale a RenderTexture
Another pretty robust way to tackle this is to use a RenderTexture. The idea is to actually render your game at 84x48, using a tiny RenderTexture, then render that texture at your actual resolution with some scale factor, using point-sampling / nearest-neighbor filtering so it looks clean and pixelly. The advantage of this, similar to the shader approach, is that you can pretty much ignore the scale factor while making your game. You'd do all your game logic in the tiny-resolution coordinate space and just scale up the viewport right at the end. If you wanted non-pixelly UI or crisp fonts, you could render those with a separate higher-resolution camera and overlay that on top of your game.
So the high level setup would be as follows. Your game camera is configured with a RenderTexture which has dimensions 84x48. All your sprite assets are authored at native low-resolution, and all your game logic operates in that space. You have a 2nd camera which we'll call the viewport camera, which displays the scaled-up game-texture and also any high resolution UI. This camera renders normally, directly to the display. One way you can render the scaled-up RenderTexture is by using a plane mesh with an unlit texture material that has your game's RenderTexture as its texture. This is maybe a bit complex to set up but should have great results and be easy to work with once done.
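A rough sketch of that setup; the camera and plane references are assumptions about your scene (wired up in the Inspector), not a fixed recipe:

```csharp
using UnityEngine;

public class LowResSetup : MonoBehaviour
{
    [SerializeField] private Camera gameCamera;    // renders the 84x48 game
    [SerializeField] private Renderer screenPlane; // plane the viewport camera sees

    void Start()
    {
        // Tiny render target matching the virtual screen.
        var rt = new RenderTexture(84, 48, 16);
        // Point sampling keeps pixels crisp when the texture is scaled up.
        rt.filterMode = FilterMode.Point;

        // The game camera draws into the tiny texture instead of the display.
        gameCamera.targetTexture = rt;

        // The plane's unlit material shows the scaled-up result.
        screenPlane.material.mainTexture = rt;
    }
}
```

The viewport camera then renders normally to the display, seeing only this plane (plus any high-resolution UI layered on top).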
Hope this helps!
EDIT: Just noticed this is from 2014...oh well.